Wrapped in Lew Papers: The psychology of climate psychologization – Part 1

Guest essay by Andy West

∙First of 3 posts examining papers by Lewandowsky & co-authors before ‘conspiracy ideation’ claims. These papers warn of cognitive bias effects, all of which occur in the CAGW Consensus, confirming it is heavily biased. Can’t admit this? Skeptics exposing the dilemma? So… push skeptics beyond the pale, minimizing cognitive dissonance.

Psychologist Stephan Lewandowsky’s ‘conspiracy ideation’ papers (‘Moon Hoax’ and ‘Recursive Fury’), which link climate skeptics to generic belief in ‘way out there’ conspiracies, have generated a great deal of traffic in the climate blogosphere and the media, not least regarding pretty much inarguable challenges to their detailed methodology and data collection, the legitimacy of such approval procedures as occurred, and even the ethics of the papers: essentially the entire validity of these works. Indeed, ‘Recursive Fury’ was eventually withdrawn from the journal Frontiers in Psychology on ethical grounds.

However prior papers from Lewandowsky (with various co-authors) identify and warn us about a list of major cognitive bias effects in society, all of which occur within the social phenomenon† of Catastrophic Anthropogenic Global Warming (CAGW), and strongly contribute to the dominance of this phenomenon. Constrained so tightly by his own findings, wrapped if you will in Lew papers, yet apparently possessing a worldview that is highly challenged by any questioning of the climate change ‘Consensus’ (note: a challenge to worldview is itself one of the warnings), any attempt by Lewandowsky to analyze rising world skepticism is very likely to have resulted in a polarized outcome: Either a wholesale rejection of the climate Consensus based upon the belated realization that all his above warnings apply to CAGW, and must always have applied, or an attempt to place skeptics beyond the pale, which hence might preserve a pre-existing worldview and prevent the head-on intellectual and emotional crash of the bias list with the behavior of the Consensus. It seems that the latter course was taken. While I have some sympathy for anyone caught in such an excruciating position, and the resultant behavior in these circumstances is typically not fully conscious, the debacle described in the second paragraph above seems very much like a desperate and sustained attempt to reduce cognitive dissonance.

This short series of posts does not delve further into the tangle surrounding Lewandowsky’s recent jaunt into conspiracy ideation, represented by Moon Hoax / Recursive Fury and other papers. Instead I explore, in detail, warnings about cognitive bias that came mainly before that jaunt. In this first post, each of the warnings is detailed by type. In the second post, the excellent applicability of each warning to CAGW is demonstrated. And finally, in the third post, the clash of these warnings with preconceptions is examined, a clash that psychologists and academia generally should heed regarding climate change perceptions. The three posts together form an extensive look at climate psychologization, using Lewandowsky’s work and stance as a prominent example case and framework, demonstrating that bias has blinded the discipline of psychology and prevented it from applying established principles and past findings (even about bias!) to the climate domain, which in turn has led to grossly erroneous conclusions. Along the way we glimpse the root causes of, and flawed treatments for, climate depression aka eco-anxiety aka apocalypse fatigue, and open a useful window onto the fundamental workings of Consensus culture itself.

Note: no original work on psychology is contributed in these posts; the conclusions of the prior papers from Lewandowsky and associated authors are taken at face value, with explanatory comment but without significant critique. And no prior knowledge of psychology is required to follow this series, which is (hopefully!) broken down into a logical trail of modest steps that folks can follow. This first and shorter post introduces the list of cognitive biases.

The first warning type can be termed ‘worldview bias’. The paper Misinformation and Its Correction: Continued Influence and Successful Debiasing by Lewandowsky et al (L2012) is about the spread of misinformation, plus strategies to correct this misinformation and counteract its damaging effects. Yet the paper spends a lot of time on the role that an individual’s pre-existing worldview plays in absorbing misinformation in the first place, and likewise in being resistant to later concerted attempts at correction. In a short article summarizing the paper at the Association for Psychological Science online, the authors state that: “Individuals’ pre-existing attitudes and worldviews can influence how they respond to certain types of information”. By ‘information’ here they are referring both to original misinformation and also to corrected information. L2012 contains numerous quotes about how one’s worldview can seriously bias the processing of incoming information, including the rejection of information that challenges the held worldview, maintaining misinformation that supports it, and even (via a ‘backfire’ effect) perceiving misinformation corrections as a justification or reinforcement of the held worldview. E.g.

It is possible that one’s worldview forms a frame of reference for determining, in Piaget’s (1928) terms, whether to assimilate information or to accommodate it. If one’s investment in a consistent worldview is strong, changing that worldview to accommodate inconsistencies may be too costly or effortful. In a sense, the worldview may serve as a schema for processing related information (Bartlett, 1977/1932), such that relevant factual information may be discarded or misinformation preserved.

And also this:

Thus far, we have reviewed copious evidence about people’s inability to update their memories in light of corrective information and have shown how worldview can override fact and corrections can backfire.

As long as this is taken in the lightweight form (worldview likely doesn’t have an overwhelming influence in all cases; there’s evidence that within some topic domains at least, particular worldviews appear to introduce only modest bias), this position seems both consistent with other literature and also not particularly controversial. Indeed it seems like common sense when considered as a part of the expression below, also from L2012 (and if we take worldview as a subset of ‘assumed truths’):

As numerous studies in the literature on social judgment and persuasion have shown, information is more likely to be accepted by people when it is consistent with other things they assume to be true (for reviews, see McGuire, 1972; Wyer, 1974).

While it appears that more research into the underlying psychological mechanisms is required, and no doubt there are challenges to this position, let’s just take it as read here that L2012 is right and one’s worldview has a strong influence (the paper is emphatic at various places) on the information one does or does not accept as ‘true’. Whatever the reader’s opinion, the important thing in this post is what the opinion of psychologists is, with Lewandowsky and co-authors as our main example 🙂. So the type one warning amounts to: ‘beware of the bias from one’s worldview’.

Type two concerns incoming information that contains a significant emotional component. The paper Theoretical and empirical evidence for the impact of inductive biases on cultural evolution by Griffiths et al (G2008, one of the other authors is Lewandowsky), includes this paragraph:

Sperber (1996, p. 84) states that ‘the ease with which a particular representation can be memorized’ will affect its transmission, and Boyer (1994, 1998) and Atran (2001) emphasize the effects of inductive biases on memory. This idea has some empirical support. For example, Nichols (2004) showed that social conventions based on disgust were more likely to survive several decades of cultural transmission than those without this emotional component. This advantage is consonant with the large body of research showing that emotional events are often remembered better than comparable events that are lacking an emotional component (for a review, see Buchanan 2007).

This quote is a little hard to grasp outside the context of the paper, but it says that (mental representations of) social conventions or cultural concepts with an emotional component are easier to memorize, which appears to result in their being retained for longer and transmitted more readily to others in society than would be the case for concepts without the emotional load. Social conventions based on ‘disgust’ are the example explored by Nichols, but other literature referred to in this quote (and elsewhere) suggests that the same effect occurs for a range of different emotive stimuli that can be carried within generic information. The word ‘advantage’ applied to the Nichols example here presumably refers to the enhanced prospering of the concept itself, and perhaps also to the fact that ‘disgust’ would typically accompany concepts deemed unhealthy for society. So in short, concepts that include an emotive load will possess an arbitrary bias in their favor.

The same point also appears in L2012, which posits (quoting Peters et al in support) that because an emotive load strongly affects the prospects of generic information being passed on, this should also hold for misinformation (the main theme of L2012); i.e. an emotive load should have an effect on the degree to which misinformation both spreads and persists.

Concerning emotion, we have discussed how misinformation effects arise independently of the emotiveness of the information (Ecker, Lewandowsky, & Apai, 2011). But we have also noted that the likelihood that people will pass on information is based strongly on the likelihood of its eliciting an emotional response in the recipient, rather than its truth value (e.g., K. Peters et al., 2009), which means that the emotiveness of misinformation may have an indirect effect on the degree to which it spreads (and persists). Moreover, the effects of worldview that we reviewed earlier in this article provide an obvious departure point for future work on the link between emotion and misinformation effects, because challenges to people’s worldviews tend to elicit highly emotional defense mechanisms (cf. E. M. Peters, Burraston, & Mertz, 2004).

There is an important extra observation at the end of this quote regarding the inter-relatedness of emotive content and worldview. To some extent, the emotive load is in the eye of the beholder. Information (or misinformation) that strongly challenges a specific worldview may produce an emotive response in one individual but not in another, arousing in the former ‘highly emotional defense mechanisms’. What the quote doesn’t say is that this is only half of the picture. Information (or misinformation) that powerfully promotes a specific worldview may likewise produce a strong emotive response, e.g. euphoria, self-justification, enhanced feelings of security and identity [relative to worldview-aligned social entities]. While no doubt future work is required as the paper suggests, the implication is that these implicit emotional responses will add to or subtract from any explicit emotional content, aiding transmission still further in the case of a worldview alignment, and attenuating it in the case of a worldview clash (and possibly, for the latter, spawning the transmission of countering information). This would cause a social amplification of any polarization that already existed regarding the perceived ‘truth’ of the original information. (Note: I used inverted commas on the word truth only to remind folks that for many speculative and / or complex concepts not examined long in retrospect, there would already be some fuzziness and interpretation regarding the level of truth, even without any emotional interference.)

I mention in passing that this part of the above quote: ‘But we have also noted that the likelihood that people will pass on information is based strongly on the likelihood of its eliciting an emotional response in the recipient, rather than its truth value (e.g., K. Peters et al., 2009)’, is a major contributor to the spread of memes; in terms of narrative success, emotive punch is rewarded more than veracity. The lens of memetics is extremely useful for examining this whole area, but I digress and we are staying with Lewandowsky’s work here.

So, the type two warning amounts to: ‘beware of the bias from emotive content’, to which we might also add the rider for any particular information: is there implied emotional content, essentially via a powerful type 1 reaction, which may enhance or attenuate any explicit emotive bias?
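As a purely illustrative aside (not taken from any of the Lew papers), here is a minimal toy sketch in Python of the type 2 / type 2A interaction just described. The function names, weights and probabilities are entirely my own assumptions, chosen only to show how an explicit emotive load plus a worldview-driven implicit response could modulate the chance that a message is passed on, amplifying polarization between receivers whose worldview the message confirms and those whose worldview it challenges.

```python
import random

def pass_on_probability(emotive_load, worldview_alignment,
                        base=0.2, emotive_weight=0.4, alignment_weight=0.3):
    """Toy chance that a receiver retransmits a message.

    emotive_load: explicit emotional punch of the content, 0..1 (type 2).
    worldview_alignment: -1 (message clashes with the receiver's worldview)
        to +1 (message confirms it); stands in for the implicit emotional
        response of type 2A. All weights here are arbitrary assumptions.
    """
    p = base + emotive_weight * emotive_load + alignment_weight * worldview_alignment
    return min(max(p, 0.0), 1.0)

def simulate(n_receivers=10_000, emotive_load=0.5, seed=1):
    """Count retransmissions among receivers whose worldview the message
    confirms versus those whose worldview it challenges."""
    rng = random.Random(seed)
    aligned_passes = clashing_passes = 0
    for _ in range(n_receivers):
        aligned = rng.random() < 0.5          # half the audience is 'aligned'
        alignment = 0.8 if aligned else -0.8  # implicit type 2A response
        if rng.random() < pass_on_probability(emotive_load, alignment):
            if aligned:
                aligned_passes += 1
            else:
                clashing_passes += 1
    return aligned_passes, clashing_passes

if __name__ == "__main__":
    a, c = simulate()
    print(f"retransmissions by worldview-confirmed receivers: {a}")
    print(f"retransmissions by worldview-challenged receivers: {c}")
```

With these assumed numbers, the same message is retransmitted roughly four times as often by worldview-confirmed receivers as by worldview-challenged ones; the point is only to make the amplification mechanism concrete, not to model any real data.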

Type three concerns the Continued Influence Effect (CIE). The paper Explicit warnings reduce but do not eliminate the continued influence of misinformation by Ecker et al (E2010, one of the other authors is Lewandowsky), neatly explains the CIE in this paragraph:

For example, H. M. Johnson and Seifert (1994) presented participants with a story about a fictitious warehouse fire, allegedly caused by volatile materials stored carelessly in a closet. Participants were later told that the closet had actually been empty. Although participants later remembered this retraction, they still used the outdated misinformation to make inferences; for example, people might argue that the fire was particularly intense because of the volatile materials or that an insurance claim may be refused due to negligence. H. M. Johnson and Seifert (1994) termed this reliance on misinformation the continued influence effect (CIE). The CIE is robust and occurs in a variety of contexts, regardless of the particular story being presented and regardless of the test applied (Ecker et al., in press; H. M. Johnson & Seifert 1994, 1998; Wilkes & Reynolds, 1999).

The paper is available here: http://rd.springer.com/content/pdf/10.3758%2FMC.38.8.1087.pdf (you may need to cut and paste this link into your browser; for some reason it doesn’t work directly for me). Ecker works at the ‘cogsci’ cognitive science lab of the University of Western Australia, where Lewandowsky also worked before moving to Bristol University in the UK.

E2010 demonstrates the robustness and persistence of the CIE via reference to various real-world examples, such as reports relating to Weapons of Mass Destruction in the Iraq conflict, reports relating to the alleged link between autism and vaccines, a New York Times article suggesting that China had directly benefitted from a crisis in the US economy, and the pseudo real-world example of laboratory analogues of court proceedings. E.g.

The continued influence of misinformation is also detectable in real-world settings. For example, during the 2003 invasion of Iraq, the public was exposed to countless hints that weapons of mass destruction (WMDs) had been discovered in Iraq. Even though no such report was ever confirmed, these constant hints were powerful enough to engender, in a substantial proportion of the U.S. public, a longstanding belief in the presence of WMDs that has persisted, even after the nonexistence of WMDs became fully evident (Kull, Ramsay, & Lewis, 2003; Lewandowsky, Stritzke, Oberauer, & Morales, 2005). Unconfirmed hints can thus engender false memories in the public (analogous to the “sleep” example presented at the outset) that resist subsequent correction (analogous to the warehouse fire example above).

I don’t intend to pursue here the exploration of potential mechanisms for the CIE within E2010 or other papers, except to include this summarizing paragraph:

The CIE typically has been explained by reference to a mental-event model that people build when trying to understand an unfolding event (H. M. Johnson & Seifert, 1994; van Oostendorp, 1996; Wilkes & Leatherbarrow, 1988). On this view, a retraction of central information creates a gap in the model, and—because people are apparently more willing to accept inconsistencies than they are voids in their event model—they continue to rely on misinformation. That is, people prefer to retain some information in crucial model positions (e.g., what caused something to happen or who was involved), even if that information is known to be discredited (H. M. Johnson & Seifert, 1994; van Oostendorp & Bonebakker, 1999).

It is notable that the above quote is directly followed by this:

Previous efforts [i.e. before the experiments described in this paper] to reduce the CIE have been pursued along various lines, most of which have remained unsuccessful.

The CIE would appear to be extremely tenacious, and remains influential even when considerable efforts to negate it are undertaken, for instance via repeated high-profile retractions or corrections of information that was later found to be wrong. E2010 further states [my underline]:

‘Contrary to the ease with which false memories can be created and true memories altered, the elimination of memories for information that is later revealed to be false—we refer to this as misinformation—has proven to be considerably more difficult. Misinformation continues to affect behavior, even if people explicitly acknowledge that this information has been retracted, invalidated, or corrected (Ecker, Lewandowsky, & Apai, in press; Ecker, Lewandowsky, Swire, & Chang, 2010; Gilbert, Krull, & Malone, 1990; Gilbert, Tafarodi, & Malone, 1993; H. M. Johnson & Seifert, 1994, 1998; Seifert, 2002; van Oostendorp, 1996; van Oostendorp & Bonebakker, 1999; Wilkes & Leatherbarrow, 1988; Wilkes & Reynolds, 1999).’

E2010 describes two modest experiments aimed at combating the CIE, run on 125 and 92 test subjects respectively. The following quote from the abstract for the paper summarizes the results of those tests:

The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning—giving detailed information about the continued influence effect (CIE)—succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning—reminding people that facts are not always properly checked before information is disseminated—was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether.

So, even when subjects are explicitly warned beforehand that such a thing as the CIE exists, and are then also told afterwards that certain information given to them in the experiment was false, along with a clear and plausible explanation as to why the original information was false, the CIE is still not eliminated; i.e. subjects still displayed some level of belief in the false information they’d received. The mention of the level of fact checking in the above quote is also very important; more generally, it emphasizes that uncertainties within information, whatever their source, should be clearly communicated, even though this has to happen in conjunction with other efforts in order to be truly effective against the CIE, should the information later prove to be partially or wholly in error. This concept is crucial within the CAGW debate, and so we’ll return to it later.

One factor that can help to reduce the CIE is suspicion towards the source(s) of information that may later turn out to be false. In other words, possessing a healthy skepticism (e.g. regarding the potential politicization of the source). It seems that a skeptical stance considerably reduces the CIE. Two other papers with Lewandowsky as lead author are referenced by E2010 in support of this finding; please take a moment to truly absorb the underlined text at the bottom of this quote [my underline]:

The second factor that seems to reduce the CIE is suspicion toward the source of the misinformation. In the WMD studies discussed earlier, belief in the existence of WMDs in Iraq was correlated with support for the war and was especially pronounced in those people who obtained news from sources that supported the invasion (e.g., Fox News; Kull et al., 2003). Lewandowsky et al. (2005) uncovered a more direct link between suspicion and the ability to update misinformation related to the Iraq War. They operationalized suspicion as the extent to which respondents doubted the official WMD-related reasons for the invasion. Lewandowsky et al. (2005) found that, when this measure was used as a predictor variable, it explained nearly a third of the variance in people’s belief in misinformation. Moreover, once suspicion was entered as a predictor, previously striking mean differences between respondents in the U.S. and two other countries (Germany and Australia) disappeared and were, instead, found to reflect differing degrees of suspicion between those countries. Lewandowsky, Stritzke, Oberauer, and Morales (2009) extended the notion of suspicion by suggesting that it may be related to a more stable personality trait of skepticism—skeptics will generally tend to question the motives behind the dissemination of information.

Yes, that’s right. Lewandowsky et al are suggesting here that skepticism is a stable personality trait which makes those who possess it less subject to the influence from misinformation, more able to update their position in the light of corrections; a finding that can only mean skepticism is in fact a positive and healthy trait. Lewandowsky echoes this in L2012 [underline = section heading]:

Skepticism: A key to accuracy. We have reviewed how worldview and prior beliefs can exert a distorting influence on information processing. However, some attitudes can also safeguard against misinformation effects. In particular, skepticism can reduce susceptibility to misinformation effects if it prompts people to question the origins of information that may later turn out to be false.

Well, that makes sense. But given the position of Lewandowsky in the climate debate, plus his attempted use of psychology to paint climate-catastrophe skeptics as ‘deniers’ and way-out conspiracy theorists, this insight is highly ironic to say the least.

In the paper Correcting false information in memory: Manipulating the strength of misinformation encoding and its retraction by Ecker et al (E2011, one of the other authors is Lewandowsky), we are further warned that the CIE cannot wholly be eliminated by any retraction method known to date:

…however, the finding that retractions never eliminate continued influence altogether is pervasive and robust.

Worse still, E2011 also finds that if there is a cognitive load at the time of absorbing any retraction (i.e. the subject’s attention is divided), then the retraction will be much less effective. It occurs to me that this opens an avenue for those who are compelled to retract (e.g. by the law) yet actively seek to lessen the retraction’s impact, for instance by choosing the style of delivery. Further, mixed messages in the retraction itself may create an ‘automatic’ cognitive load, plus explicit emotive content might also add to or subtract from the retraction’s effectiveness, per the sections above, as desired. One cannot always assume that retractions themselves will be neutral, even if they are supposed to be.

E2011 concludes that [my underline]:

The practical implications of the present research are clear: If misinformation is encoded strongly, the level of continued influence will significantly increase, unless the misinformation is also retracted strongly. Hence, if information that has had a lot of news coverage is found to be incorrect, the retraction will need to be circulated with equal vigor, or else continued influence will persist at high levels. Of course, in reality, initial reports of an event, which may include misinformation (e.g., that a person of interest has committed a crime or that a country seeks to hide WMDs), may attract more interest than their retraction. Moreover, retractions apparently need full attentional resources to become effective; hence, retractions processed during conditions of divided attention (e.g., when listening to the news while driving a car) may remain ineffective.

I think ‘significantly increase’ in this context essentially means ‘spread strongly within society’.

So, the type three warning amounts to: ‘beware of the bias from the CIE’, which it appears can never be wholly eliminated. Further, we are told that unless specific warnings about uncertainty in the information (e.g. from lax fact checking, or implied from any other source), plus the possibility of being misled, are given up front, the resulting bias from any information that turns out not to be wholly true will be significant. And any retraction will need to be circulated with equal vigor, otherwise the created bias will not be significantly reduced.

Warning type four concerns the repetition of persuasive messages and the ‘third person’ effect.

From the abstract for the video presentation Scientific Uncertainty in Public Discourse: The Case for Leakage Into the Scientific Community (L2014), given by Lewandowsky as part of the AGU Chapman Conference on Communicating Climate Science: A Historic Look to the Future, comes this passage concerning the ‘third person effect’ and the constant repetition of a message:

To illustrate with an example, the well-known “third-person effect” refers to the fact that people generally think that others (i.e., third persons) are affected more by a persuasive message than they are themselves, even though this is not necessarily the case. Scientists may therefore think that they are impervious to “skeptic” messages in the media, but in fact they are likely to be affected by the constant drumbeat of propaganda.

You can find the video along with the text of the abstract at Watts Up With That here.

Yes, yes I know, Lewandowsky cites ‘(climate) skeptic propaganda’ as his proffered example of the third person effect; I’ll return to the context and validity of this in the later posts. But the point is that L2014 cites the third person effect as having a real influence upon folks, an influence that typically will be increased by narrative repetition. Plus it is indeed a well-known effect, which therefore will receive no further explanation here, yet it does play a part in the dominance of the climate Consensus.

That’s probably enough causes of major bias. To summarize these as warnings for an individual seeking to avoid bias, the various papers by Lewandowsky and associated authors include the following wisdom:

Type 1: Beware of the bias from one’s worldview.

Type 2: Beware of the bias caused by explicit emotive content.

Type 2A: Beware of implied emotional content, which via a powerful type 1 reaction may enhance or attenuate Type 2 (essentially an interaction of 1 & 2).

Type 3: Beware of the bias from the CIE, which can never be wholly eliminated.

Type 3A: Beware of information that does not come with health warnings.

Type 3B: Try to be aware of corrections / retractions; be suspicious if these are not on a par with the vigor of the original information transmission.

Type 3C: Be healthily skeptical; suspicions based on innate skepticism reduce the CIE.

Type 4: Beware of the ‘third person effect’, especially for oft repeated / saturating information.

I figure that by this point, a lot of readers can already see how this list of warnings about cognitive bias is directly applicable to the dominant environmental culture promoted by the CAGW Consensus 🙂. Yet given that this biased dominant culture will reach for every possible means to misunderstand objective analysis, explanations have to be both very clear and very thorough, including implications, and also based as far as possible (I think I’ve managed this exclusively) upon data from the Consensus itself, hence avoiding um… denial. So that is the job of the next two posts.

Before signing off on this post, I’ll point out that I have not looked into any of the experimental methods or math described in some of the referenced papers (and such basic skills as I once had in statistics atrophied decades ago anyhow!). As mentioned above, the conclusions are taken at face value, and while to some extent what matters in this series is that Lewandowsky and associated authors believe them, in my limited experience the conclusions appear to mesh reasonably with other literature and are not far out on a limb, as one can only conclude regarding the Moon Hoax / Recursive Fury papers. And while the variety of real-world examples invoked is, I suppose, rather narrow (for instance E2010, E2011, L2012 and other Lewandowsky contributions all feature information about ‘Weapons of Mass Destruction’ during the Iraq war), which may allow in some bias by the back door, there nevertheless seems to be a laudable attempt at objectivity, plus conclusions that do not all appear to come out weighted in a single direction. For instance, L2012 contains various statements that might surprise those only familiar with Lewandowsky’s conspiracy ideation work and climate related articles (which generally align strongly with alarmist positions from governments, NGOs, and academic press releases, plus include attempts to characterize climate skeptics as beyond the pale). Here are some of those statements:

Governments and politicians can be powerful sources of misinformation, whether inadvertently or by design.

The magnitude of opposition to GM foods seems disproportionate to their actual risks as portrayed by expert bodies (e.g., World Health Organization, 2005), and it appears that people often rely on NGOs, such as Greenpeace, that are critical of peer-reviewed science on the issue to form their opinions about GM foods (Einsele, 2007). These alternative sources have been roundly criticized for spreading misinformation (e.g., Parrott, 2010).

For example, after a study forecasting future global extinctions as a result of climate change was published in Nature, it was widely misrepresented by news media reports, which made the consequences seem more catastrophic and the timescale shorter than actually projected (Ladle, Jepson, & Whittaker, 2005). These mischaracterizations of scientific results imply that scientists need to take care to communicate their results clearly and unambiguously, and that press releases need to be meticulously constructed to avoid misunderstandings by the media (e.g., Riesch & Spiegelhalter, 2011).

Added to which list, the important insight about skepticism from the same paper is worth repeating:

Skepticism: A key to accuracy. We have reviewed how worldview and prior beliefs can exert a distorting influence on information processing. However, some attitudes can also safeguard against misinformation effects. In particular, skepticism can reduce susceptibility to misinformation effects if it prompts people to question the origins of information that may later turn out to be false.

Given an understanding that governments and NGOs can be potent sources of misinformation regarding, say, weapons of mass destruction or GM crops, it seems at best highly inconsistent to give them a free pass regarding the cultural juggernaut of CAGW. And the third quote is even related to climate change! So once upon a time at least, it seems Lewandowsky acknowledged that bias towards the catastrophic point of view can occur within this domain. Yet what proportion of climate change related academic or NGO press releases actually take heed of the above advice in L2012? I guess WUWT on its own has probably racked up a log of hundreds that are most certainly ambiguous, resulting frequently in mischaracterized results, sometimes wildly so (even with the IPCC technical reports taken as ‘the norm’). And what proportion are well-written and accurate, especially where doing this would disadvantage the Consensus, or at least not promote it? Given the cumulative feedback effect of the totality of press releases upon the course of climate science over decades, how can psychologists believe that all those poor ones won’t be causing very significant bias?

None of the social effects occurring within the domain of CAGW are new, and the cognitive mechanisms underlying these effects are bequeathed to us from the evolutionary trajectory of Homo sapiens sapiens. Psychology has made slow but useful progress in understanding these mechanisms over the last 150 years. But as confirmed in the follow-on posts, possessing knowledge of cognitive bias by no means guarantees protection against it; by avoiding their own collision of worldviews, Lewandowsky and many of his colleagues simply don’t apply these hard-won findings to the entire social landscape of the climate change domain. Instead, they appear to assume that the dominant paradigm is magically free of bias, and focus only upon negative reactions to that paradigm, namely inaction and skepticism; within the latter, of course, some bias will indeed be found. However, most psychologists soon find themselves mired deep in tar regarding public inaction, the apparently inexplicable riddle of a largely unmoved rump of the public; a kind of ‘inert skeptics’. They find only puzzlement, or a string of secondary-strength explanations like psychological distancing or issue fatigue, and many more (we find in this series the real reason). At least the practitioners who stop at this point realize that such large numbers of people can’t be dismissed as ‘out there’, nor can the tiny community of ‘active skeptics’ really be driving them all. A few practitioners nevertheless persist in trying to assign various degrees of villainy, fingering the ‘deniers’. Lewandowsky has travelled the extra mile into the mythic, beyond the merchants of doubt meme and on to a highly improbable theory of conspiracy ideation. One psychologist at least, PhD candidate in Social Psychology Jose Duarte, has been brave enough to call out ‘Moon Hoax’ and ‘Recursive Fury’ in the strongest terms (‘this is fraud’, ‘wildly unethical’). I wonder how many of his silent colleagues in the discipline are ‘only’ biased, or instead, afraid of condemnation for stepping out of the Consensus line?

Probing deep into Consensus culture and peeling off the surface of climate psychologization, the rest of this series presents extensive evidence supporting the above paragraph, examining Lew and crew’s list of cognitive biases presented here in relation to the entire social domain of climate change. It becomes very clear that this list is in fact excellently applicable to the dominant culture of the catastrophic within that domain, and clear also that acknowledging the truth of this would cause a severe clash of worldviews with reality for many folks, including Lewandowsky and, it seems (at least from a first-pass search), almost all psychologists who have stuck their fingers into the muddy mess of climate change mind-sets.

Andy West : www.wearenarrative.wordpress.com

Footnote

The social phenomenon of CAGW is largely independent of anything that is happening in the climate, and whether this is good, bad or indifferent. CO2 worries acted as a trigger, but once triggered the social processes have a developmental trajectory of their own. While scientific uncertainties surrounding the wicked problem of understanding the climate system remain broad, science will be unable to constrain or shut down these social processes, which are currently dominated by a culture of catastrophe.

Main Reference Papers

L2014 = abstract for the video presentation Scientific Uncertainty in Public Discourse: The Case for Leakage Into the Scientific Community, by Lewandowsky. Video and text of the abstract at WUWT.

L2012 = Misinformation and Its Correction: Continued Influence and Successful Debiasing, by Lewandowsky et al.

E2011 = Correcting false information in memory: Manipulating the strength of misinformation encoding and its retraction, by Ecker et al (one of the other authors is Lewandowsky).

E2010 = Explicit warnings reduce but do not eliminate the continued influence of misinformation, by Ecker et al (one of the other authors is Lewandowsky). You may need to cut and paste this link into your browser: http://rd.springer.com/content/pdf/10.3758%2FMC.38.8.1087.pdf

G2008 = Theoretical and empirical evidence for the impact of inductive biases on cultural evolution, by Griffiths et al (one of the other authors is Lewandowsky).

89 Comments
November 6, 2014 12:38 am

Thanks, Anthony

Brute
Reply to  andywest2012
November 6, 2014 12:41 pm

Lewandowsky is a genius and I, for one, cannot wait for his work on irony.

rogerthesurf
Reply to  Brute
November 6, 2014 1:53 pm

Unfortunately, ironic as AGW may be, it hasn’t stopped governments around the world from steadily robbing the tax payers cash, steadily repressing everyone in the name of AGW and generally repressing the world economies.
Cheers
Roger
http://www.rogerfromnewzealand.wordpress.com

brc
Reply to  andywest2012
November 6, 2014 5:25 pm

Constrained so tightly by his own findings, wrapped if you will in Lew papers, yet apparently possessing a worldview that is highly challenged by any questioning of the climate change ‘Consensus’ (note: a challenge to worldview is itself one of the warnings), any attempt by Lewandowsky to analyze rising world skepticism is very likely to have resulted in a polarized outcome: Either a wholesale rejection of the climate Consensus based upon the belated realization that all his above warnings apply to CAGW, and must always have applied, or an attempt to place skeptics beyond the pale, which hence might preserve a pre-existing worldview and prevent the head-on intellectual and emotional crash of the bias list with the behavior of the Consensus.

In the interest of constructive criticism, please consider some full stops. Trying to parse this gigantic unbroken sentence made me abandon the rest of your article. Life’s too short to read blog posts in this type of impenetrable verbiage. Which is disappointing, because I’m sure it was interesting.

Reply to  brc
November 7, 2014 7:59 am

A colon (there’s one halfway down) is generally considered an equivalent break / breather to a full stop, albeit one introducing a subsidiary list / clause. This is acknowledged (in US English at least) via the normal capitalisation after the colon.

Scottish Sceptic
November 6, 2014 1:16 am
Otter (ClimateOtter on Twitter)
Reply to  Scottish Sceptic
November 6, 2014 1:33 am

Was that written before or after the NYT admitted that Bush & Blair were RIGHT about WMDs?

Scottish Sceptic
Reply to  Otter (ClimateOtter on Twitter)
November 6, 2014 1:39 am

It was written before the summer.

Scottish Sceptic
Reply to  Otter (ClimateOtter on Twitter)
November 6, 2014 1:45 am
Barry
Reply to  Otter (ClimateOtter on Twitter)
November 6, 2014 6:14 am

Bush and Blair thought there was an active WMD program, which there was not, according to the NYT.
“All had been manufactured before 1991, participants said. Filthy, rusty or corroded, a large fraction of them could not be readily identified as chemical weapons at all. Some were empty, though many of them still contained potent mustard agent or residual sarin. Most could not have been used as designed, and when they ruptured dispersed the chemical agents over a limited area, according to those who collected the majority of them.
In case after case, participants said, analysis of these warheads and shells reaffirmed intelligence failures. First, the American government did not find what it had been looking for at the war’s outset, then it failed to prepare its troops and medical corps for the aged weapons it did find.”
[The NY Times has recanted (corrected) that story this past month, now acknowledging there were many thousand chemical and biological weapons in Iraq before the war. .mod]

catweazle666
Reply to  Otter (ClimateOtter on Twitter)
November 6, 2014 6:30 am

One wonders what he needed 550 tons of yellowcake uranium for.
500 tons of uranium shipped from Iraq, Pentagon says
http://edition.cnn.com/2008/US/07/07/iraq.uranium/
Pentagon Completes Secret Shipment Of 500 Tons Of Uranium From Iraq To Canada
http://www.nationalreview.com/the-feed/3960/pentagon-completes-secret-shipment-500-tons-uranium-iraq-canada
Perhaps he was going to make glow-in-the-dark garden gnomes.

Owen in GA
Reply to  Otter (ClimateOtter on Twitter)
November 6, 2014 6:43 am

Some of the papers captured in the palaces indicated that, like Bush and Blair, Pres Hussein thought he had an active WMD program, so I can see why they might have been confused. If your opponent acts and speaks as though he has some capability, only a fool assumes he does not.

more soylent green!
Reply to  Otter (ClimateOtter on Twitter)
November 6, 2014 8:02 am

They were only half-right. Yes, tons of WMD and tons of yellowcake ore were found. But no WMD factories were found. Admittedly, chemical factories have multiple uses. There is little evidence of active WMD programs. OTOH, Saddam had plenty of warning and plenty of time to hide his tracks before the invasion. Under questioning by the FBI, Saddam said there were no active WMD manufacturing programs but that he planned to restart them soon.
So it’s a very mixed bag. Why the White House hid what they did find is a head scratcher.

Louis
Reply to  Otter (ClimateOtter on Twitter)
November 6, 2014 10:52 am

BTW Barry, Bush and Blair were not the only ones who thought Iraq had WMDs. Bill Clinton, Hillary, Gore, Kerry, and many others are on video claiming that Saddam was reactivating his WMD program. Hillary even talked about the dangers of his nuclear program. They called for a regime change in Iraq in 1998, before Bush was President. So, if Bush lied, he was only repeating the lies of the Democrats who came before him.

Duster
Reply to  Otter (ClimateOtter on Twitter)
November 6, 2014 12:08 pm

Why the White House wasted any time on Iraq when the situation in Afghanistan was what it was is the head scratcher. As it was, knocking down Iraq and the Baathists could only destabilize the entire region, and anyone that was aware of the tensions between the various sects of Islam should have known that. The Iranians would not want a predominantly Sunni power on their border, and likewise the Sunni-dominated powers to the west would not want a Shiite power on their borders either. Iraq currently occupies a parallel, unenviable geopolitical situation, similar to what Israel experienced 3,000 years ago, stuck between Egypt, the Hittites, the Assyrians, and later the Persians.

Reply to  Otter (ClimateOtter on Twitter)
November 7, 2014 7:15 am

Seemed to me at the time that Saddam was always going to hedge about WMDs, for the very simple, nay – really simple and OBVIOUS reason – that there was no way he would let Iran think he was defenceless in this area.

November 6, 2014 1:22 am

I enjoyed this article, so thanks, Andy. It could have been more concise but it never hurts to be thorough. I used to be an Old Earth Creationist and I can relate to some of the mechanisms you talked about.
Perhaps this paper may be of interest, seeing as autism and vaccines were mentioned in passing:
http://www.translationalneurodegeneration.com/content/pdf/2047-9158-2-25.pdf
Its conclusion begins, “The present study provides new epidemiological evidence supporting an association between increasing organic-Hg exposure from Thimerosal-containing childhood vaccines and the subsequent risk of ASD diagnosis.”
I mention it for the sake of being up-to-date on the issue. Just like you can’t ‘fix’ climate with simplistic (and expensive) measures, you can’t fix a complex physiological problem with them, either.
I look forward to the next instalments.

November 6, 2014 1:41 am

Anyway, who believes in a fact doesn’t change that fact.

Alx
Reply to  kalya22
November 6, 2014 5:46 am

True facts are facts, but beliefs are very important.
Some facts are easy, the sun rises every day, not too much debate about that fact. But the issue with many facts, even ones agreed upon, is that they are perceived differently in everyone’s head.
Oh this sucks, I hate sunrise, Time to go to work.
Oh sunrise, what a glorious day for fishing.
Oh sun, what a great gift from God my benefactor.
Oh not another sunny day, we need rain.
The fact does not change, but what it means, what it represents, is it good or bad, what is the importance varies wildly.
In climate science there are about a ka-zillion points of data, all facts, and what people do with those facts fluctuates from the interesting to the irrational to moral posturing to the absurd to delusional fantasy.

Reply to  Alx
November 6, 2014 6:36 am

Yes! Humans are subjective. We interpret facts based on our belief system. Ideally, in science, human subjectivity should not be a major factor for interpreting observations. In climate science over the past 2 decades, it’s been the most important factor.
Data continues to pour in. This means we have more information to learn. Scientists’ views should be converging and coming together towards the truth. Instead, we remain polarized, in some cases with diverging views, despite billions on research and thousands of studies.
One side accepts information that lines up with their belief system and stores it as knowledge to reinforce their belief system, while discarding information that contradicts their belief system. The other side rejects what the first side accepts and accepts what they reject and for the same psychological, human reason………..because it fits into their belief system.

Sun Spot
Reply to  Alx
November 6, 2014 10:33 am

+100, yes many like to state facts when really they are just expressing a strong opinion about a piece of data!

Richard of NZ
Reply to  Alx
November 6, 2014 1:22 pm

I would assume that if one is to discuss facts, then they should get the facts correct. The sun does NOT rise every day, but we the observers descend to the sun every day. It is this assumption of belief over actuality that is resulting in so many of the problems that are perceived in the world, many of which may not (or may) exist.

Duster
Reply to  kalya22
November 6, 2014 12:12 pm

You can believe or disbelieve facts. You trip over evidence. Facts are always interpretations of evidence. That is how any jury trial is run. The opposing sides present the evidence and the jury “determines the facts.” The whole “admissibility” discourse is engaged in by prosecution and defense to control jury access to evidence that might alter how they determine the facts.

toorightmate
November 6, 2014 1:52 am

A good précis.
But why waste time on a crock of shit – lewandowsky et al?

Owen in GA
Reply to  toorightmate
November 6, 2014 6:48 am

Information processing theory is a very important endeavor in political psychology. Understanding the process humans use to store and assimilate information is very important. The fact that Dr Lewandowsky seems to have been involved in rather mainstream research at the time of these papers makes it all the more disturbing that he would go so far off the rails in Recursive Fury. I think his own political psychology may have gotten the better of him there.

Reply to  Owen in GA
November 6, 2014 9:51 am

I would rephrase that as “…his own worldview” may have…

November 6, 2014 2:16 am

“You might find it difficult to credit but some pseudo-psychologists have, through a simple and inane abuse of the scientific method, “proved” we were clinically insane, Untermensch or some sort of skank end runt that would shame any self-respecting Neanderthal.”
http://thepointman.wordpress.com/2014/11/05/of-squirrels-and-men/
Pointman

Greg
November 6, 2014 3:35 am

…through avoiding their own collision of worldviews Lewandowsky and many of his colleagues simply don’t apply these hard-won findings to the entire social landscape of the climate change domain.

No. Lew and his cronies are fully aware of the game they are playing and that is one of propaganda.
He has not suddenly forgotten all he has learnt and written, he is exploiting it in producing effective propaganda. They know the 97% paper is a crock. They also know that it will be just as effective, even if it is retracted, because of the effects you have detailed from his own work.

knr
Reply to  Greg
November 6, 2014 6:46 am

Why change when such an approach has been so rewarding for him? It’s clear he’s working in an area that cares nothing for ethics or good science, but is one where ego can never be too big and truth does not even come a poor second to belief.

Duster
Reply to  Greg
November 6, 2014 12:15 pm

Far more likely that Lew is simply as human as the rest of us. He’s not a super villain, just wet up to his belt, with pyramids visible in the distance.

Brute
Reply to  Duster
November 6, 2014 1:05 pm

I myself favor incompetence and ignorance over evil as explanation in most cases, including his. That said, there is also an ugly twist to the man and it will be smart for anyone around him to watch their backs.

Harry Passfield
Reply to  Greg
November 6, 2014 1:25 pm

Greg: I’m convinced that all the 97% papers were politically motivated; even politically ordered. I base this conviction on the way that politicians world-wide are so quick to quote the stat, yet do not reference which paper they are quoting (in case the one they quote has been withdrawn, is about to be withdrawn, or has been shown to be absolute crap).
Bottom line: Lew and Cook were paid – one way or another – to produce the papers.

Tim
November 6, 2014 4:00 am

A. “Sperber (1996, p. 84) states that ‘the ease with which a particular representation can be memorized’ will affect its transmission, and Boyer (1994, 1998) and Atran (2001) emphasize the effects of inductive biases on memory.”
B. “…emotional events are often remembered better than comparable events that are lacking an emotional component (for a review, see Buchanan 2007).”
So that would be:
A. The more that people are exposed to regular propaganda, the more they will accept it.
B. Add a driver like ‘fear’ and memory will be further enhanced.

November 6, 2014 4:02 am

There is an unintentionally funny article by Lew and Pancost in The Conversation today:
https://theconversation.com/are-you-a-poor-logician-logically-you-might-never-know-33355
It starts off with
“If you’re bad at something, you probably also lack the skills to asses [sic] your own performance”
As well as the appropriate typo, it’s ironic to see Lew talking about poor logic and Dunning-Kruger!

Mark from the Midwest
Reply to  Paul Matthews
November 6, 2014 4:48 am

I prefer the more direct version, as: They are too stupid to know how dumb they are

Alx
Reply to  Mark from the Midwest
November 6, 2014 5:28 am

If it is true that one can be too stupid to know how dumb they are, then one has to be smart to understand how dumb they are. Which actually makes sense. Smugness is fertile ground for ignorance, humbleness fertile ground for learning and expanding. Also makes me think of this quote:
“He who knows, knows not. He who knows not, knows” or something along those lines.

Mark from the Midwest
Reply to  Mark from the Midwest
November 6, 2014 7:49 am

My grandfather, who was very smart and successful, told us two things: 1) the sign of a smart person is that they know what you don’t know, 2) if you pay attention you might just learn something … Oh he told us a few other things, but they’re a bit too bawdy for this rather proper group

Reply to  Paul Matthews
November 6, 2014 11:42 am

mebbe the two authors (lew and pancost) need to asses(s) a nearby hole in the ground, and then take a good “introspective” look at themselves.
Will they find much variation?

November 6, 2014 4:05 am

Interesting. So CIE persists even when it is warned against?
Well, there are some obvious reasons for that:
1) Having built a view of the world using that fact it is easier to keep using the fact because our view of the world is not related to “truth” but rather practicality. We use “up” and “down” not because we think the world is flat but because it is relevant to how we live. Having adopted false information and committed to actions – it is relevant to how we live. Being wrong (like a spherical planet) is not enough to reject it. It has to be harmful. At least it has to be irrelevant.
2) Never believe anything until it is officially denied. The tests to eliminate CIE keep emphasising the original fact by saying “Don’t Believe It!” If people keep telling you to ignore the elephant in the room then there is always a suspicion about that tusked thing you thought you saw. Why keep mentioning it unless the elephant is important? The test to eliminate CIE ought to involve natural swamping of false information via time and other facts – not opposition to the false information.
3) If ideas are understood emotionally as well as intellectually then the erasure of ideas must be emotional as well as intellectual. Did they test the use of mockery to destroy the CIE? Words can insult ideas but only cruel words hurt.
4) Having spread the misinformation to their close contacts the misinformation will be fed back. This reinforces the misinformation unless all the close contacts are corrected as well. Few people can influence all their friends more than all their friends together can influence them. And there is no resistance initially – only for the correction.

Reply to  M Courtney
November 6, 2014 7:46 am

M Courtney, some excellent thoughts as to why CIE might occur even when information has been withdrawn or corrected. These are good thoughts to keep in mind when working with someone to correct misinformation they are clinging to.
I particularly like the mockery suggestion. 🙂 And it occurs to me that it doesn’t have to be done in a mean or spiteful way, but could be done in a funny, playful way that will bring the emotive content forward and also help sharpen the memory of the corrective information.

Reply to  climatereflections
November 6, 2014 8:27 am

Thanks for the kind reply.
It is curious that online blogs are naturally very strict on making fun of another viewpoint. More so than on outright objection that moves the debate into a brick wall.
But that makes sense as it often seems that most flame wars start from a reaction to a jest – not a rejection of a statement.
So I’m not sure how playful mockery can work except with the closest of friends or family.

Boyfromtottenham
November 6, 2014 4:09 am

Got it in one, Greg! Well spotted.

tagerbaek
November 6, 2014 4:27 am

What I’d like to read is a survey and an explanation of the other side’s mentality. They’re clearly the delusional ones.

Reply to  tagerbaek
November 6, 2014 4:35 am

That statement could be used by anybody on any subject.

Don B
November 6, 2014 4:53 am
John West
November 6, 2014 5:18 am

“we find in this series the real reason”
I can’t wait to hear it, but forgive me if I remain skeptical for now.

Reply to  John West
November 6, 2014 3:21 pm

I hope you’ll remain skeptical forever, it’s a healthy trait 🙂

Alx
November 6, 2014 5:20 am

I do not find it as interesting that the Lewandowsky papers are described as fraudulent and wildly unethical as what drives Lewandowsky to write such papers. In that regard and for advancing the cause of psychology, Lewandowsky would make one heck of a study.

Bryan A
November 6, 2014 5:42 am

It is interesting that Lew would write so many papers about conspiracies. It is as if his mind has been twisted into seeing conspiracies and conspirators around every corner, perhaps lurking in the dark recesses of his mind.

Reply to  Bryan A
November 6, 2014 7:44 am

He is supporting the meme that people who believe in conspiracies are out of touch with reality. This shields those who engage in corruption from public scrutiny. Think of the words you have been conditioned to associate with the word conspiracy. People have an aversion to looking at corruption because of the words the media have attached to it. People can easily be controlled by shame.
It is absurd to imagine that the powerful don’t get together to plan how to maintain and increase their power. This is in fact normal behavior in business.

KNR
Reply to  gyan1
November 6, 2014 11:34 am

‘People have an aversion to looking at corruption because of the words the media have attached to it.’
Really, how does this come about? In fact it may even be that the media encourages people to think there is corruption even when none exists, because this allegation often makes a good story.

Reply to  gyan1
November 6, 2014 12:28 pm

KNR
November 6, 2014 at 11:34 am – “‘People have an aversion to looking at corruption because of the words the media have attached to it.’
Really, how does this come about? In fact it may even be that the media encourages people to think there is corruption even when none exists, because such an allegation often makes a good story.”
You’re taking my comment out of context because I left out the c word (conspir) that puts comments in moderation. When corruption involves conspir it usually isn’t covered in the mainstream media, and people have trouble accepting that corruption can involve conspir because they don’t want to be seen as wacko nut jobs. It is a meme that has successfully prevented widespread exposure of collusions that have taken place.

MikeN
November 6, 2014 6:40 am

Beyond the fake answers submitted by SkepticalScience folks, Brandon Shollenberger has highlighted the problem with Lew’s paper. You have lots of non-skeptics saying they don’t believe the moon landing was faked. From this Lew reaches the conclusion that skeptics think the moon landing was faked.
Brandon pointed out how this reasoning can be applied to any position, such as “warmists support genocide”.

Reply to  MikeN
November 6, 2014 9:57 am

or as Steve McIntyre observed with his wry wit: “population n = 0”

November 6, 2014 7:14 am

Reblogged this on CraigM350 and commented:
A long read but worth it. Quite clearly there is something wrong with the Lew papers. Projection?

AnonyMoose
November 6, 2014 7:19 am

“But given the position of Lewandowsky in the climate debate, plus his attempted use of psychology to paint climate-catastrophe skeptics as ‘deniers’ and way-out conspiracy theorists, this insight is highly ironic to say the least.”
Because he’s indicated that skepticism is a positive trait, it might be easier for him to call people deniers than to admit that they have any positive traits.

Craig Loehle
November 6, 2014 7:34 am

Narcissists have no problem seeing the flaws in others, but clearly these don’t apply to themselves…

Dawtgtomis
November 6, 2014 7:47 am

The story looks similar to the Nazi takeover of Germany to me. Seems like an act of patriotism to be one of the ‘crazies’ spreading truth in the underground.

Dawtgtomis
Reply to  Dawtgtomis
November 6, 2014 8:27 am

Oops, sorry, by “the story” I was referring to the history of the perversion of the green movement and its politicization through unfounded information about the motives and objectives of other groups. Probably shouldn’t have used the ‘N’ word.

November 6, 2014 7:49 am

“Still a man hears what he wants to hear and disregards the rest” -Paul Simon

Harry Passfield
Reply to  gyan1
November 6, 2014 1:09 pm

“Now from his pocket quick he flashes,
The crayon on the wall he slashes,
Deep upon the advertising,
A single worded poem comprised
Of four letters.”

Paul Simon – Poem on the Underground Wall

November 6, 2014 8:03 am

“As long as taken in the lightweight form (worldview likely doesn’t have an overwhelming influence in all cases; there’s evidence that within some topic domains at least, particular worldviews appear to introduce only modest bias), then this position seems both consistent with other literature and also not particularly controversial. Indeed it seems like common sense when considered as a part of the expression below, also from L2012 (and if we take worldview as a subset of ‘assumed truths’):”

I disagree. Unless I’m reading this paragraph incorrectly, I think worldview can drastically affect someone’s processing of information.
For example: someone who is a firm believer in God would tend to seek out information which supports such a belief, and discount or ignore evidence to the contrary.
Likewise in the CAGW “debate”, on both sides, those who either believe or are skeptical will seek out information to support their belief or skepticism. Thus we end up with “echo chambers”. It won’t really matter which view is correct, the individuals will do their best to promote their worldview and downplay or ignore the opposite. I’m sure I’m guilty of it, and I’ve seen it here time and time again.

Reply to  Jeff Alberts
November 6, 2014 8:16 am

Hi Jeff.
This paragraph doesn’t rule out ‘drastic’ in some, perhaps even many cases. Just to say that it will not be overwhelming in *all* cases.

Reply to  Jeff Alberts
November 6, 2014 8:31 am

Echo chambers – so true. Even here any alternative view is often vehemently attacked as trolling.
But it doesn’t mean that people actually seek echo chambers. Many people analyse why they believe what they do and accept:-
Some parts as faith
Some as tradition – 2nd hand learning
Some as empirical observation
Some as convenient short-hand for a subject they have little interest in researching, for example…
And knowing how you “know” means you can adapt and adopt different sources of information.

Harry Passfield
Reply to  M Courtney
November 6, 2014 1:05 pm

M Courtney: In actual fact, I quite welcome ‘trolls’ – at least, the intelligent ones who one hopes are receptive to argument. That is the reason I welcome them: I get to test my arguments as well, and I don’t mind admitting that I often learn something, if not about CC science, then about debating skills. That said, the only trolling arguments I find really annoying are the ones that are pedantically precise (tautology?) and call for peer-reviewed literature to back up ALL counter-arguments – or are abusive.

Reply to  Jeff Alberts
November 6, 2014 9:10 am

It depends on how much the individual has incorporated their worldview into their personal identity. Information becomes filtered through the lens of the worldview when it is strongly associated with “who you are”.
Anything that comes after the words “I am……….” becomes part of an identity that defines how you see the world. Politics are so polarizing because people’s identities are defined by their affiliation. An attack on their party or ideology becomes a personal attack that must be defended because of this identification. This is why climate science is so hard to get through to those who identify as liberal/Democratic. They would have to abandon their identity to accept facts.
A measure of identification is how much the ego reacts. I have had interactions with CAGW believers that show they are completely identified with the idea that humans are destroying the planet and that survival depends on action against human emissions. My statements of simple facts are seen as a threat to their survival. It is very hard for people to see that their worldview is only a reflection of the limited information they have exposed themselves to. They can only know what they know, and believe that they are their thoughts.
Those who are able to observe thoughts without identification are able to witness cause and effect for what it is, free of the limitations of the conditioned mind.

Steve Reddish
Reply to  gyan1
November 6, 2014 1:17 pm

+10 There seem to be two types of people out there: those who assess the validity of a concept by the evidence that supports it, and those who judge a concept by the presenter. The latter reveal themselves by the statement “You disagree with me, ergo you are biased”, when what the former said was “Your statement is not supported by the evidence.”
SR

JT
November 6, 2014 8:11 am

Thanks for the essay, Andy. As a layman in this discourse within psychology I have very little scientific knowledge but I found it easy to follow your reasoning without it being trivial. However, I have two questions that keep coming back to me:
How does Lewandowsky define misinformation?
Does it matter in terms of effect on biases such as CIE which information comes first?
Regarding misinformation:
My concern here is that the reasons and intentions for the source to communicate are parameters that influence how we subjectively establish the truthfulness of information. While there are examples of obvious fictional information such as novels and fairy tales that influence our understanding of the World, what is more relevant towards establishment of the ‘truth’ is information that is delivered as being factual. Examples of this would be news reports, educational material, documentaries, scientific literature, and our own first-hand experience.
Whether such factual information is classified as “mis”information has an element of subjectivity in itself. I prefer the term countering information to misinformation, since misinformation to my ears is a subset that presupposes a malign reason for spreading countering information, such as the conscious spreading of untruths. While there are many examples of such malign intent – the conscious spreading of lies or the omission of countering facts known by the source – there are also obviously examples of benign reasons and intentions for spreading countering information perceived by the source as ‘truth’. In any case, making a presupposition about the quality of information sources, without a clear definition of what that quality implies, muddles the line of reasoning for why biases occur, as it introduces a bias in itself. Hence I find it important to understand what exactly is meant by the term misinformation.
Regarding the order of information and counter information:
Are you aware of any studies that investigate the effects on CIE or other bias phenomena of the order in time in which information and countering information are communicated?
E.g. in the experiment with the alleged volatile material in the closet, later explained to be untrue: if the time order were reversed, e.g. formulated as “The fire investigation found the warehouse closet empty, which falsified the circulated information that careless storage of volatile material in the closet was the likely cause of the fire.” or similar, will the later false information still be as influential? Does the amount of time passed between the information and the counter-information matter to the bias?
In this same example you quote in your text I see no reason that any person in general, without prior knowledge or connection to this event, would disbelieve the first statement that the fire was caused by volatile material in the closet; it seems plausible. Allowing myself to speculate now: since this information comes first we (people in general) thereby create a “norm of truth”, a benchmark, a plausible explanation, in our minds regarding this event upon which we later compare other information. When confronted with the information that the closet in fact was empty we already have a plausible truth norm established and we are forced to shift it to a new position and for some reason this seems to take considerable effort. The time and energy, the logic and emotion invested into accepting an initial benchmark of truth is worth something to us. In other words: we don’t like to accept that we have been wrong.
This “micro scale” example of establishing a truth norm regarding a fictitious warehouse fire with no notable influence on our lives could be extrapolated (again I’m speculating) to larger more influential contexts such as reasons to invade countries, or CAGW, or what we think we know about people close to us, all the way to our World view. We all have individual norms of truth about everything that we are aware of upon which we compare new information and the shift of the norm to a new position is something we generally tend to be reluctant about. And the other way around, the well-known cognitive dissonance effect from having your World view challenged could be extrapolated down to the micro scale, i.e. in the case of the warehouse fire we are confronted with a cognitive dissonance on micro level which makes us tend to stick with the initial truth norm even in an abundance of evidence to the contrary and warnings about biases and trustworthiness of sources.
If the order matters, it implies that proponents of information countering an already established consensus are always at a disadvantage regardless of the truthfulness. However it might have practical implications on how to more effectively communicate countering information at the micro level, fact by fact, i.e. start with the (countering) fact and then say what that fact disproves, instead of first presenting the (norm) information and then debunking it.
Or has the time order of communication of information vs countering information been found to be irrelevant to any effect on biases, which would suggest my speculations to be false?

Reply to  JT
November 6, 2014 10:03 am

People are more open to new information than they are to attacks on what they believe, so facts are better presented in that order.
It is sad that a majority of the population don’t want to be bothered with the facts because it takes too much effort to discern the truth. They just want to be told how it is so they don’t have to spend precious time figuring out what is real.

Reply to  JT
November 6, 2014 3:17 pm

Hi JT,
I’d say that your speculation is largely sound. And I’m most certainly an amateur in this domain too, which is why I made the steps as easy to follow as possible, so everyone else wouldn’t have to wade through the treacle I’ve gotten past so far.
While experiment reveals effects regarding misinformation, deep causes in the brain are still speculation (afaik). Your thought that it’s ‘hard work’ to change one’s established model, certainly echoes stuff I’ve read to date. If you read all the Lew papers in detail, you will see something like this.
Only retrospect can determine that information was misinformation. And controlling the order of events is okay in experiment, but hardly in real life. As you note, much misinformation is only inadvertently so, rather than deliberately so. BUT… if the ‘information’ was not transmitted with appropriate caveats and health warnings in the first place, or indeed these were *purposefully* left out because they might dilute the message, then this is tantamount to misinformation from the get-go, because bias is *certain* to set in under such circumstances, and the party transmitting the information has removed the best possibility of fighting the CIE when further events / research turn up *new* information that might call the old information into question.
I think one could rightly use the term ‘deliberate’ in this case.

Reply to  andywest2012
November 6, 2014 6:19 pm

I believe if the information is deliberately wrong it’s called disinformation.

Charles Davis
November 6, 2014 8:30 am

If one is looking for evidence of confirmation bias amongst skeptics, the comments to any article in this forum would be an excellent place to start.

KNR
Reply to  Charles Davis
November 6, 2014 11:10 am

Confirmation of what – that Lew is a poor trick-psychologist who is willing to do whatever it takes?
The irony of the moon landing hoax idea is that it was the AGW supporters who most regarded it as a hoax, which, given that some AGW supporters are also 9/11 ‘truthers’, comes as no surprise.
Like the infamous 97%, the paper is poor from top to bottom and would have been failed, not publicised, if an undergraduate had handed it in for assessment – such is the quality of those leading climate ‘science’.

Harry Passfield
Reply to  Charles Davis
November 6, 2014 12:57 pm

“Confirmation bias” has two parents. If you think here is an excellent place to see sceptics showing their bias, I would suggest that SkS would be a whole lot worse – for warmists showing theirs. But of course, as SkS delete so many sceptic comments, the only ones left – that could be used in any meaningful study – would be (probably?) 97% pro-warmist. How’s that for confirmation bias?

Charles Davis
Reply to  Harry Passfield
November 6, 2014 3:03 pm

I fully agree with you. I just wish ours were more objective and balanced. It would make it a whole lot easier to refer people to WUWT.

Reply to  Charles Davis
November 6, 2014 1:50 pm

And the comments in all the screaming progtard blogs about how the end of the world is now assured with Senator Inhofe in charge of the environment committee or whatever… I love the taste of liberal tears.

PiperPaul
November 6, 2014 8:57 am

“The greatest challenge facing mankind is the challenge of distinguishing reality from fantasy, truth from propaganda. Perceiving the truth has always been a challenge to mankind, but in the information age (or as I think of it, the disinformation age) it takes on a special urgency and importance.”
– Michael Crichton, 2003

PiperPaul
November 6, 2014 8:58 am

Ubiquitous and powerful hardware and software have enabled the illusion of competence to become much more common, easy to achieve and dangerous.

November 6, 2014 9:07 am

I will admit that I only got through the first half of this interesting paper. My notes:
1) The CIE example regarding WMD seems a little off base. My memory isn’t of people mindlessly continuing to believe in WMD, but rather of the ridicule heaped on the Bush administration for its continuing inability to locate the WMD.
2) CIE seems to me to have huge implications in a judicial/court setting. So often one of the lead lawyers will make a statement that is then struck down by the judge, but of course, the jury has heard the words, and based on CIE, will likely still use them in their decision. Maybe this would justify the jury viewing the proceedings via video on a taped delay, and anything struck by the judge should be blanked out.

jim south london
November 6, 2014 11:41 am

Simple question: how much heat does one ton of CO2 actually trap, and does Lewandowsky or anybody else even know?

Chip Javert
November 6, 2014 11:52 am

Psychology is famous for just 2 things:
(1) Being generally recognized as a non-rigorous field of study (e.g. an impressive percentage of foundational and other “experimental” results simply cannot be reproduced);
(2) Desperately wanting to be recognized as a rigorous field of study (albeit without remediating the shoddy methodology).
…ok, maybe 3 things:
(3) Comical use of statistics.

R2Dtoo
November 6, 2014 12:06 pm

World view may be the most important aspect of this discussion. It is world view that is the focus of propaganda/education. Our students have been inundated with “sustainability” (who can argue with apple pie?). Their lens for interpreting information is the singular view of mankind harming the earth. This is why countering information is often rejected outright. They “know” that sea level is rising and dangerous – they have been told that a thousand times. No matter what is used as a factual or rhetorical counter-argument, it will be rejected.
We may, however, be missing an option to change perceptions. People tend to believe what they “see”. To me, the pictures of California beaches that show the same shore features as 50 years earlier are powerful counter-arguments – without a word spoken. A few billboards without a lot of words?

tadchem
November 6, 2014 12:10 pm

ROFL.
The title of the article clearly says ‘Lew papers’, but Josh’s illustration immediately replaced that in my mind with ‘loo papers.’

Truthseeker
Reply to  tadchem
November 6, 2014 1:42 pm

Glad you finally got the joke …

Reply to  tadchem
November 6, 2014 1:51 pm

+10

Greg Cavanagh
November 6, 2014 3:56 pm

To be perfectly honest, I think spending anything more than 10 minutes on the musings of Stephan Lewandowsky is a waste of time. Time that could be more usefully spent doing just about anything else.
Well, there’s my “highly emotional defense mechanism”. Yup…

pat
November 6, 2014 5:00 pm

Lew has a convert, though I can’t find any evidence whatsoever that Kasra ever made a single contribution to a CAGW-sceptic website over the years:
6 Nov: Salon.com: I was once a climate change denier
I’m a scientist now, but the embarrassment lingers. Here’s why I let myself be duped — and how I came to my senses.
Kasra Hassani, The Tyee
I, a scientist with a PhD in microbiology and immunology, was a climate change denier. Wait, let me add, I was an effective climate change denier: I would throw on a cloak of anecdotal evidence, biased one-sided skepticism and declare myself a skeptic…
So what happened to me then? What was the revelation? How did I enter…
The ‘Tear down the conspiracy wall!’ phase
I began to actively pursue knowledge on how to discuss climate change with conspiracy theorists (the ones who believe in conspiracies in principle and therefore are more likely to be climate change deniers) and I realized my strong-held beliefs and stubbornness matched the same criteria as the people I was trying to convince. I was a denier myself…
http://www.salon.com/2014/11/06/i_was_once_a_climate_change_denier_partner/

November 7, 2014 2:29 am

Lewandowsky’s article mentioned by Paul Matthews above, which is at
https://theconversation.com/are-you-a-poor-logician-logically-you-might-never-know-33355
makes a false statement about Anthony Watts. I put up the following comment, which has been removed (comments are now closed):
“The authors mention “the contrarian blogger who is paired with a climate expert in “debating” climate science and who thinks that hot brick buildings contribute to global warming.” and link to a discussion in which Anthony Watts says: “A brick building that’s been out in the summer sun, you stand next to it at night, you can feel the heat radiating off of it. That’s a heat sink effect… Yes, we have some global warming. It’s clear the temperature has gone up in the last 100 years, but what percentage of that is from carbon dioxide and what percentage of that is from the changes in the local and measurement environment?”
From this it is clear that Watts does not think that hot brick buildings contribute to global warming, and he is not a contrarian, in that he fully accepts the existence of man-made global warming. The claim about him is therefore false on two counts, and the link provided shows it to be false.
Lewandowsky has published three peer reviewed papers on climate sceptics, all three of which contain falsehoods. The first one makes a false claim about the origins of the on-line respondents to a survey, plus a claim in the paper’s title that was based on just ten respondents; the second, which was riddled with errors, insulted a number of named people, including Anthony Watts, and was retracted; the third, another on-line survey, but of the general public, claimed to confirm the results of the first, when its findings couldn’t have been more different.
When these errors and false claims are pointed out, Lewandowsky just ignores them, refusing any dialogue with his critics. What’s he doing publishing at a site called the Conversation?”