Readers may recall the post John Cook’s 97% consensus claim is about to go ‘pear-shaped’. This is an update to that, in two parts. First is the introduction, and the second part is a backstory on Cook’s hidden data, the data he didn’t want to share.
Brandon Shollenberger writes:
Introduction for the Upcoming TCP Release
As you may have heard, I recently came into possession of previously undisclosed material for a 2013 paper by John Cook and others of Skeptical Science. The paper claimed to find the consensus on global warming is 97%.
That number was reached by having a group of people read the abstracts (summaries) of ~12,000 scientific papers and say which endorsed or rejected the consensus. Each abstract was rated twice, and some got a third rater as a tie-breaker. The total number of ratings was 26,848, done by 24 people. Twelve of them, combined, contributed only 873 ratings. That means the other 12 people did approximately 26,000 ratings.
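The arithmetic behind those figures is easy to verify; here is a quick sketch using only the numbers quoted above:

```python
# Figures quoted above from the paper and its supporting data.
total_ratings = 26848    # total abstract ratings
total_raters = 24        # number of people who rated
bottom_12_ratings = 873  # combined ratings by the 12 least active raters

top_12_ratings = total_ratings - bottom_12_ratings
print(top_12_ratings)  # 25975, i.e. approximately 26,000
print(round(100 * top_12_ratings / total_ratings, 1))  # 96.7 (% of all ratings)
```

In other words, half the raters produced under 4% of the data; the study's results rest almost entirely on the other half.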
Cook et al. have only discussed results related to the ~27,000 ratings. They have never discussed results broken down by individual raters. They have, in fact, refused to share the data which would allow such a discussion to take place. This is troubling. Biases in individual raters are always a problem when having people analyze text.
Biases can arise from differences in worldview, differences in how people understand the rating system, or any number of other things. These biases don’t mean the raters are bad people or even bad raters. It just means their ratings represent different things. If you take no steps to address that, your ratings can wind up looking like this:
This image shows the ratings broken down by individual rater for the Cook et al. paper. The columns go from zero to seven. Zero meant no rating was given. The others were given as:
1 Explicitly endorses and quantifies AGW as 50+%
2 Explicitly endorses but does not quantify or minimise
3 Implicitly endorses AGW without minimising it
4 No Position
5 Implicitly minimizes/rejects AGW
6 Explicitly minimizes/rejects AGW but does not quantify
7 Explicitly minimizes/rejects AGW as less than 50%
The circles in each column are colored according to rater. Their size indicates the number of times the rater selected that endorsement level. Their position on the y-axis represents the percentage of ratings by that rater which fell on that level.
As you can see, these circles do not line up. Some circles are higher than others, meaning those raters were more likely to pick that particular value. Some circles are lower than others, meaning those raters were less likely to pick that particular value. That shows the raters were biased. If they weren’t, the circles would have lined up.
Now then, the authors of the paper did take a step to try to address this issue. When two raters gave different ratings to the same abstract, they were given the opportunity to discuss the disagreement and modify their ratings. This reduced the biases present in the ratings, making the data look like this:
As you can see, the post-reconciliation data has no zero ratings. It also has fewer biases. Fewer is not none, however. The problem of bias still clearly exists, and it will necessarily affect the study’s results. The biases of raters whose circles are largest will necessarily influence the results more than those of raters whose circles are smaller.
To see why this is a problem, remember each circle’s size is dependent largely upon how active a rater was. Had different raters been more active, the larger circles would have been in different locations. That means the combined result would have been in a different location as well.
To demonstrate, I’ve created a simple image. Its layout is the same as the last figure, but it shows the data for the 12 most active raters combined (yellow). It also shows what the combined result would have been if the activity of those 12 raters had been reversed (red):
There are readily identifiable differences given this simple test. That shows rater bias affects the final results. It’s true this particular test produced differences favoring the Cook et al. results, but that doesn’t make it okay. Bias influencing results isn’t okay, and a different test could have produced a different pattern.
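To see the mechanism in miniature, here is a small Python sketch using made-up numbers (not the actual Cook et al. ratings): each rater has a distribution across the seven endorsement levels, the combined result is the activity-weighted average of those distributions, and reversing the activity weights shifts the combined result.

```python
import numpy as np

# Hypothetical example, NOT the actual study data.
# Rows = three raters, columns = endorsement levels 1..7,
# each row giving the fraction of that rater's own ratings per level.
profiles = np.array([
    [0.01, 0.20, 0.30, 0.45, 0.02, 0.01, 0.01],
    [0.02, 0.10, 0.20, 0.64, 0.02, 0.01, 0.01],
    [0.01, 0.05, 0.15, 0.75, 0.02, 0.01, 0.01],
])
activity = np.array([5000, 2000, 500])  # how many abstracts each rater did

def combined(profiles, weights):
    """Activity-weighted average of the rater profiles."""
    w = weights / weights.sum()
    return w @ profiles

print(combined(profiles, activity))        # combined result as weighted
print(combined(profiles, activity[::-1]))  # same raters, activity reversed
```

The two printed distributions differ even though the raters themselves are unchanged; only who did more work changed. That is the sense in which rater activity, not just rater opinion, shapes the headline number.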
Regardless, we now know the results of the Cook et al. paper are influenced by the raters’ individual biases. That’s a problem in and of itself, but it raises a larger question. All the people involved in this study belong to the same group (Skeptical Science). All of these people know each other, talk to one another and have similar overall views related to global warming.
If biases between such a homogenous group can influence their results, what would the results have been if a different group had done the ratings? How would we know which results are right?
Update: It’s worth pointing out the paper explicitly said, “Each abstract was categorized by two independent, anonymized raters.” That would have mitigated concerns of bias if true. However, it’s difficult to see how a small group of friends can be considered “independent” of one another. That’s especially true when the group actively talked to one another (on a forum run by the lead author), even about how to rate specific papers, while the “independent” ratings were going on. This issue was first noted here, and it’s highly relevant when considering issues of bias.
============================================================
Cook et al’s Hidden Data
You might think this post is about the previously undisclosed material I recently gained possession of. It’s not. Even with the additional material I now have, there is still data not available to anyone.
You see, while people have talked about rater IDs and timestamps not being available, everybody seems to ignore the fact Cook et al. chose not to release data for 521 of the papers they examined.
I bring this up because Dana Nuccitelli, second author of the paper, recently said:
Morph – all the data are available, except confidential bits like author self-ratings. We even created a website where people can attempt to replicate our results. We could not be more transparent.
Actually, they could be much more transparent. Here is the data file they released showing all papers and their ratings. It has 11,944 entries. Here is a concordance showing the ID numbers of the papers they rated. It has 11,944 entries. Here is a data file showing all the ratings done by the group. It has entries for 11,944 papers.
The problem is there were 12,465 papers:
The ISI search generated 12 465 papers. Eliminating papers that were not peer-reviewed (186), not climate-related (288) or without an abstract (47) reduced the analysis to 11 944 papers
Cook et al eliminated 521 papers from their analysis. That’s fine. Filtering out inappropriate data is normal. What’s not normal is hiding that data. People should be allowed to look at what was excluded and why. Authors should not be able to remove ~4% of their data in a way which is completely unverifiable.
But it’s worse than that. The authors didn’t do what their description suggests they did. Their description suggests only 47 papers their search generated had missing abstracts. That’s not true. Over two hundred of the results did not have abstracts. We know this because John Cook said so in his own forum. In a topic titled Tracking down missing abstracts, he said:
Well, we’ve got the ‘no abstracts’ down to 70 which is not too shabby out of 12,272 papers (and starting with over 200 papers without abstracts). I’m guessing a number of those will be opinion/news/commentary rather than peer-reviewed papers.
The 12,272 doesn’t match the 12,465 number because more papers were added later. That doesn’t matter though. What matters is at least 200 of their search results did not have abstracts. The group went out and looked for missing abstracts, inserting ones they found. No documentation of these insertions has ever been released. The fact their search results were modified has never even been disclosed.
It’s impossible to know which abstracts were added. That means it is impossible to verify the correct abstracts were added without verifying all ~12,000 results. That also means it is impossible to ensure the abstracts added were not a biased sample.
There’s more. We’re told 186 papers were excluded for not being peer-reviewed. No explanation is given as to how they determined which papers were and were not peer-reviewed. Comments in the forums show there was no formal methodology; people just investigated results which seemed suspicious to them. There is no way to know how good a job they did of removing non-peer-reviewed material.
And there’s still more. We’re told 288 papers were excluded for not being climate-related. Again, no explanation is given as to how this filter was applied. It does not seem to have been applied well. For example, while 288 papers were excluded for this reason, one of the most active raters said this in the forum:
I have started wondering if there’s some journals missing from our sample or something like that, because I have now rated 1300 papers and I think I have only encountered a few papers that are actually relevant to the issue of AGW. There are lots ond lots of impacts and mitigation papers but I haven’t seen much of papers actually studying global warming itself. This might be something to consider and check after rating phase.
If only “a few” out of 1300 papers were “actually relevant to the issue of AGW,” how is it 12,177 papers out of 12,465 were “climate related”? The only explanation I can find is most papers are “climate related” but not “actually relevant to the issue of AGW.” This is an example. It’s one of the 64 papers placed in the highest category (explicitly claiming humans cause 50%+ of recent warming), and it says:
This work shows that carbon dioxide, which is a main contributor to the global warming effect, could be utilized as a selective oxidant in the oxidative dehydrogenation of ethylbenzene over alumina-supported vanadium oxide catalysts. The modification of the catalytically active vanadium oxide component with appropriate amounts of antimony oxide led to more stable catalytic performance along with a higher styrene yield (76%) at high styrene selectivity (>95%). The improved catalytic behavior was attributable to the enhanced redox properties of the active V-sites.
If you don’t know what any of that means, don’t feel bad. The paper is about a narrow chemistry subject which has no bearing on global warming. Its only relation to climate is that one clause, “which is a main contributor to the global warming effect.” According to Cook et al., that is apparently enough to make it “climate related.” In fact, that’s enough to make this paper one of the 64 which most strongly support global warming concerns.
Given that, it’s difficult to imagine what papers might have been rated “not climate related.” Fortunately, we don’t have to use our imaginations. While it’s true Cook et al excluded all this data from their data files, it turns out that data is available [via] the search function they built.
Nobody could have guessed that. Nobody who downloaded data files would have thought to go to a website and use a function to find information excluded from those data files. Even if they had, the site only shows the final ratings of those papers. It doesn’t show any intermediary data like that in the data files.
Regardless, it does allow us to check some of the concerns raised in this post. For example, we can do a search to see what sort of papers were considered “not climate related.” I’ll provide the only title in category 1 and part of its abstract:
Now What Do People Know About Global Climate Change? Survey Studies Of Educated Laypeople
When asked how to address the problem of climate change, while respondents in 1992 were unable to differentiate between general “good environmental practices” and actions specific to addressing climate change, respondents in 2009 have begun to appreciate the differences. Despite this, many individuals in 2009 still had incorrect beliefs about climate change, and still did not appear to fully appreciate key facts such as that global warming is primarily due to increased concentrations of carbon dioxide in the atmosphere, and the single most important source of this carbon dioxide is the combustion of fossil fuels.
This abstract is more forceful in its endorsement of global warming concerns than many of the ones labeled “not climate related.” Its topic, what people know about global warming, is certainly more relevant than topics like the molecular chemistry of material production. I could post example after example showing the same pattern. Papers excluded for not being “climate related” are often far more relevant than papers Cook et al. included.
You could never find this out by examining Cook et al’s data files though. Those data files exclude the information necessary to check things like this. It’s only because of an undisclosed difference in their data sets that we could ever hope to check their work on this.
By the way, I encourage everyone to use that search feature to find examples of what I refer to. It’s amazing how many of the papers making up the “consensus” are [ ] “actually relevant to the issue of AGW.”
Gross scientific misconduct. Plain and simple.
Why didn’t they do that (exclude irrelevancies) before the rating phase?
I suspect that this same flaw–inclusion of the authors of irrelevant impacts and mitigation papers–led to the 97% result in the other two notable 97%-result surveys.
The “fact” that there is a consensus is the keystone of the alarmist argument. Take that away and the whole house of cards collapses. Expose a charlatan and he will scream ever more loudly that he must be believed. The emperor has no clothes.
I don’t know the answer to this question, and maybe it’s a dumb one. Anyway, here it is: Why hasn’t somebody–anybody–just hired a few of the polling companies like Gallup or Rasmussen or Harris to do a poll of scientists on the CAGW issue? I know those companies usually do political polls, but maybe they would do a poll on this issue as well. Seems to me that this is a more trustworthy and credible way of determining the degree to which the CAGW theory is believed and supported in the scientific community (which I hope is actually very low).
Any ideas out there?
Hahahaha! The vanadium paper is great!
I wrote a little bit about the peer-review process from a chemistry standpoint not long back, and that vanadium paper fits right in. All the global warming BS in the abstract is just decoration … just filler to make it sound more interesting. Nothing more.
Another example is the use of ionic liquids (horrible molten solids, actually) as “green” solvents. You can write a paper about doing reactions in the worst “solvents” imaginable, add the word “green” to it, and get it published pronto. It’s all smoke and mirrors though, and usually the real goal is to score a quickie publication, damn the environment.
http://zombiesymmetry.com/2014/04/08/the-peer-reviewed-scientific-literature-is-mostly-crap/
The peer-review process from an organic chemist’s point of view. 😛
To have validity for its conclusions, such a study must rigorously select the subject papers on a priori criteria, find raters whose opinions are neutral (as much as possible) toward the subject papers, test and prove the neutrality of raters, and then train them to evaluate using a tested rubric. After the rating is done, tests need to be applied to see if ratings drifted due to fatigue and familiarity. Anything less is an unsound research method that allows too many opportunities for error to creep in.
I sort of admire the dozen raters plowing through thousands of abstracts, even in a cursory manner, but take the results as nothing more than a curious exploration, useful only for doing a more rigorous study.
From what I can tell, Anthony, the surfacestations paper would be rated a 3. #B^)
Yes, I’m sure he will retract his position once all the facts are exposed (eye roll)
Excellent Work Brandon!
” Biases in individual raters are always a problem when having people analyze text.”
Rater bias is a problem in many situations beyond textual analysis. In short, any time you use a human being as your instrument you have to take certain precautions and you must record certain data. Suppose I am asking a group of raters to rate a photograph of a person, or anything, according to an objective criterion. Here are the steps one typically goes through.
1. Norming the raters. The researcher picks exemplars that demonstrate how the rating system works. Raters are all trained using the exemplars.
2. Calibrating the raters. Raters are then calibrated against a test set.
3. Rating. Raters are then asked to rate unrated items. You keep track of who rates what, the time they rated it, and their rating.
4. Renorming. Over time raters will drift. For example, it is no shock that you see so many 4s; one reason for this is that raters tend over time to regress to the norm. On a scale of 1-5 they will tend to call everything a 3. When you look at the ratings over time you can clearly see this. So you renorm; that is, you put the raters through another norming process.
5. Multiple raters. Many people think that using multiple raters solves these problems. It doesn’t.
6. Conflict resolution. Many people think that having a third person re-rate disagreements solves the problem. It doesn’t. You still have to do it, but again, keeping good records will allow you to answer charges that your process is biased.
In short, in any study, and I mean any study, that uses humans as a judging instrument, you must have a written protocol. You must record the important details of the rating process: who rated what? When did they rate it? How often were they in conflict with other raters (compute kappa, for example)? How were they in conflict? Who resolved conflicts, and how did they resolve them? Without this, a study that uses humans as an instrument to make a decision (does picture X show Y; does text Z indicate P; is Sally prettier than Betty) is not valid. The issue of rater bias must be addressed, and it can only be addressed with this data.
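For readers unfamiliar with the kappa statistic mentioned above, here is a minimal Python sketch of Cohen’s kappa for two raters; the ratings are a made-up toy example, not data from the study:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same items:
    observed agreement corrected for agreement expected by chance."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    # Chance agreement: probability both raters pick the same category
    # if each rated at random according to their own category frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: two raters scoring ten abstracts on the 1-7 scale.
a = [4, 4, 3, 4, 2, 4, 4, 3, 4, 5]
b = [4, 4, 4, 4, 2, 4, 3, 3, 4, 5]
print(round(cohens_kappa(a, b), 3))  # 0.655
```

A kappa of 1 means perfect agreement, 0 means agreement no better than chance. Publishing per-rater data is what makes this sort of check possible; without it, readers simply cannot compute it.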
The other point you raise about data availability is important. Certain papers were dropped. If I am rating a collection of items, say 1,200 things, and I decide that 300 of these things can’t be rated, then I can’t merely say they can’t be rated. I have to present the data and the method I used to exclude them.
Glad I asked that question now 😉
But is he right in that all of it is out there or not ? I mean raw and then processed at each stage.
Wait, the content of the papers wasn’t graded? That is, whether or not the paper offered evidence that supported the consensus wasn’t judged; just the abstracts were read? The quality of the science wasn’t considered? Whether or not the content supported the conclusions presented in the abstracts wasn’t judged?
Talk about junk science!
Naked as a jaybird. We are living in an absurd world where reality is hidden from society by a fog of selfish apathy/silence among real scientists and partisan bluster among politicians and bureaucrats. I’m afraid that science will suffer greatly as society eventually figures out just how much blood and treasure have been expended on this utter nonsense.
Meanwhile, this asshat:
http://www.huffingtonpost.com/2014/05/12/john-oliver-climate-change-debate_n_5308822.html
over at HBO is parroting Cook’s results, in an assault on “deniers” that is as demeaning as it is factually incorrect.
Kieth A. Nonemaker:
“The “fact” that there is a consensus is the keystone of the alarmist argument. Take that away and the whole house of cards collapses. Expose a charlatan and he will scream ever more loudly that he must be believed. The emperor has no clothes.”
There will always be a consensus though. The question is, what is it a consensus of? It all depends on how you go about phrasing things.
These guys report a 97% consensus, and that in turn gets batted around the media as a consensus of scientists think the world is about to end. In reality, it’s just a consensus of people who haven’t explicitly said that humans do not contribute to global warming. It’s all smoke and mirrors.
You write a paper, like that vanadium catalyst paper. You want to dress it up, put some color in it, etc. So you toss in a little blurb about global warming. Nobody, and I mean NOBODY, is going to do the opposite … write a vanadium catalyst paper and state in the abstract that global warming is BS. You either mention global warming in a way that is acceptable to the fashion of the times, or you don’t mention it at all. That a paper such as this is one that was considered by Cook’s crew is amusing as hell.
Dare I say Tree Hut Conspiracy and Idiotation nutitelli Cooked up!! par for the course it seems
Hah, my last triggered moderation !
[Rescued and posted. ~mod]
It seems to me that the 97% claim was unrealistic from the start. He should have had his study produce something more like 60%. But that would have conflicted with other studies that had around 97%. I doubt if 97% of scientists even agree on the time of day. Even if the 97% was true, why would that be relevant? If 97% turned out to be wrong and 3% turned out to be correct, would majority rule? That would be strange science. Studies like that are not a substitute for making the case.
bet they didn’t read the abstracts, just went apple- command +f MAN
if it highlighted man – yep man made warming.
command + f
perforMANce.
yep man made warming.
Maybe I’ve not understood what Cook had set out to do, but one would think that papers about mitigation would be based on an assumption that there was something to mitigate and therefore consensus.
They are masters of the psychology of propaganda. They know their papers are crap, and that someone will see through it….or even many people. But by then, the TITLE will have been picked up by the media and the damage has been done. They know that no one in the media bothers to actually check the results, and in between papers, they simply attack and demean the people who are onto them so no one will believe them when they say the study is flawed and they have proof.
This has nothing to do with facts or actual science. It’s a multi-pronged campaign designed to undermine everything solid. They are undermining their own credibility too, but they don’t care so long as they get what they want in the end.
A conclusion chasing data. That is why the data does not jibe.
I think there might be something wrong with me. Everytime I read the phrase “pear shaped”, I think “bootilicious!!!”
My first question when exploring the Consensus Tool posted by Cook et al. was why, if you want to examine a supposed SPECIFIC consensus on “anthropogenic global warming/climate change,” you would omit the word “anthropogenic” from the search terms used to establish your data pool.
I found the answer when I added that specific word, and other similar ones like “man made” and “human caused,” to my searches in THEIR system. Doing so turns up fewer than 20 papers that actually address the SPECIFIC type of global warming consensus they set out to verify.
Yeah, 97% of scientists supporting the AGW consensus…..if you think you live in the former USSR maybe.
Similar propaganda lets us believe a majority of Europeans are in favor of the EU.