Having had their first paper “Recursive Fury” retracted by the journal that originally published it, these clowns are back with a reboot that has the same sad message: “people who question the veracity of global warming/climate change are nutters”.
What’s funny is that Lew et al don’t seem to realize they are talking about a large percentage of the population who have these questions:
But that doesn’t stop them from essentially labeling everyone who does not agree with “climate change” as having “conspiracy ideation” mental issues. The paper was published in a B-list journal called the “Journal of Social and Political Psychology,” which advertises open access. What is interesting is that the recycled Lew paper was not published in the original journal that retracted it, even though that journal made this statement:
In the light of a small number of complaints received following publication of the original research article cited above, Frontiers carried out a detailed investigation of the academic, ethical, and legal aspects of the work. This investigation did not identify any issues with the academic and ethical aspects of the study. It did, however, determine that the legal context is insufficiently clear and therefore Frontiers wishes to retract the published article. The authors understand this decision, while they stand by their article and regret the limitations on academic freedom which can be caused by legal factors.
Yes, they stand by it, but given where the reboot was published, “just don’t publish in our journal again” is the real message.
If Lew et al. were looking for nutters, it seems a look at the table of contents of the journal they published in would be a prime source. Just look at some of the paper titles:
Recurrent Fury: Conspiratorial Discourse in the Blogosphere Triggered by Research on the Role of Conspiracist Ideation in Climate Denial
A growing body of evidence has implicated conspiracist ideation in the rejection of scientific propositions. Internet blogs in particular have become the staging ground for conspiracy theories that challenge the link between HIV and AIDS, the benefits of vaccinations, or the reality of climate change. A recent study involving visitors to climate blogs found that conspiracist ideation was associated with the rejection of climate science and other scientific propositions such as the link between lung cancer and smoking, and between HIV and AIDS. That article stimulated considerable discursive activity in the climate blogosphere—i.e., the numerous blogs dedicated to climate “skepticism”—that was critical of the study. The blogosphere discourse was ideally suited for analysis because its focus was clearly circumscribed, it had a well-defined onset, and it largely discontinued after several months. We identify and classify the hypotheses that questioned the validity of the paper’s conclusions using well-established criteria for conspiracist ideation. In two behavioral studies involving naive participants we show that those criteria and classifications were reconstructed in a blind test. Our findings extend a growing body of literature that has examined the important, but not always constructive, role of the blogosphere in public and scientific discourse.
rejection of science; conspiracist discourse; climate denial; Internet blogs
UPDATE: Barry Woods, who was instrumental in the original retraction of the first Lew paper, adds this in comments:
The complainants were vindicated on a key ethics concern.
Recursive Fury named and labelled real, identifiable people with pathological psychological traits.
Recursive Fury Mark 2 does not (nobody is identifiable, so the complainants were right).
I added this comment to Prof Lewandowsky’s blog:
Hmmm – Table 3 now has anonymous IDs (instead of names)…
(thus at least one ethics concern HAS been accepted and addressed)
but as Recursive Fury was the most downloaded paper (Stephan’s own words), and its Table 3 had the people actually named…
It isn’t really all that anonymous, even now…
Perhaps, now that this is published, you should take down the original from here:
I was amused by this though (from the new paper):
“Conversely, a peer-reviewed critique of LOG12 and LGO13 has recently appeared in print (Dixon & Jones, 2015) (accompanied by a rejoinder; Lewandowsky, Gignac, & Oberauer, 2015), which exhibited none of the features of conspiratorial ideation that we report in this article and which involved authors that were not part of the blogosphere examined here. Crucially, such academic discourse, however critical, does not involve the attempt to silence inconvenient voices, which has become an increasingly clearly stated goal of elements of the climate “skeptic” blogosphere.”
ref: “and which involved authors that were not part of the blogosphere examined here”
Jones and Dixon were very much involved in the blogosphere with respect to this paper and are well-known climate sceptics. Jones FOI’d the Climatic Research Unit (and eventually won) when it refused to supply data; he did this on basic scientific principle, after Climate Audit was refused CRU’s data. And the Climategate emails showed the scientists discussing how to deal with J Jones and Don Keiller (having words with their universities).
Prof J Jones even gets quoted in Mark Steyn’s book criticizing Michael Mann; Ruth Dixon has a well-respected blog; and Jonathan Jones commented in the blogosphere about LOG12 quite often during the period (at Climate Audit and Bishop Hill).
A recent example is this (at Climate Audit):
Prof J Jones:
“From one point of view there are only four things wrong with the original LOG13-blogs paper. Unfortunately those four things are the design of the experiment, the implementation of the data collection, the analysis of the data, and the reporting of the results. As a consequence of this interlinked network of ineptitude it is very difficult to disentangle all the errors from each other.
The LGO13-panel paper, by comparison, is much better. The design is relatively standard: no worse than many papers in the field. The implementation is still very poor (see for example the discussion at our post on satisficing), but it’s not so bad as to render the data completely useless. The analysis is still incorrect, but this time it is possible to tease out how and why it is incorrect, rather than just noting that it’s all a horrible mess. The reporting is still poor, but that doesn’t matter for a reanalysis.
So the original point of our comment was to see what we could say about the analysis of the data from LGO13-panel. Somewhat to our surprise we found that, once we knew what to look for, the same analysis also worked for LOG13-blogs, albeit not so clearly because of the appalling skew in that dataset. We don’t say much about other issues, not because we don’t believe they are important, but simply because it’s best in a comment to pick one important issue, where the argument can be made very clearly, and then run with it.” – Prof Jonathan Jones
Prof Henry Markram (co-founder of Frontiers) explains why he retracted Recursive Fury:
“The studied subjects were explicitly identified in the paper without their consent. It is well acknowledged and accepted that in order to protect a subject’s rights and avoid a potentially defamatory outcome, one must obtain the subject’s consent if they can be identified in a scientific paper. The mistake was detected after publication, and the authors and Frontiers worked hard together for several months to try to find a solution. In the end, those efforts were not successful. The identity of the subjects could not be protected and the paper had to be retracted. Frontiers then worked closely with the authors on a mutually agreed and measured retraction statement to avoid the retraction itself being misused. From the storm this has created, it would seem we did not succeed.
For Frontiers, publishing the identities of human subjects without consent cannot be justified in a scientific paper. Some have argued that the subjects and their statements were in the public domain and hence it was acceptable to identify them in a scientific paper, but accepting this will set a dangerous precedent. With so much information of each of us in the public domain, think of a situation where scientists use, for example, machine learning to cluster your public statements and attribute to you personality characteristics, and then name you on the cluster and publish it as a scientific fact in a reputable journal. While the subjects and their statements were public, they did not give their consent to a public psychological diagnosis in a scientific study. Science cannot be abused to specifically label and point out individuals in the public domain.” – Markram