More pear-shaped trouble for John Cook's '97% consensus'

Readers may recall the post John Cook’s 97% consensus claim is about to go ‘pear-shaped’. This is an update to that, in two parts. The first part is the introduction, and the second is a backstory on Cook’s hidden data, the data he didn’t want to share.

Brandon Shollenberger writes:

Introduction for the Upcoming TCP Release

As you may have heard, I recently came into possession of previously undisclosed material for a 2013 paper by John Cook and others of Skeptical Science. The paper claimed to find the consensus on global warming is 97%.

That number was reached by having a group of people read abstracts (summaries) of ~12,000 scientific papers and then say which endorsed or rejected the consensus. Each abstract was rated twice, and some had a third rater come in as a tie-breaker. The total number of ratings was 26,848. These ratings were done by 24 people. Twelve of them, combined, contributed only 873 ratings. That means the other 12 people did approximately 26,000 ratings.
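As a quick check of that arithmetic, here is a minimal sketch using only the totals quoted above (the per-rater split is the point at issue):

```python
# Rating workload split among the 24 Cook et al. raters,
# using the totals quoted above.
total_ratings = 26848        # all first, second, and tie-break ratings
total_raters = 24
low_activity_ratings = 873   # combined output of the 12 least active raters

high_activity_ratings = total_ratings - low_activity_ratings
print(high_activity_ratings)        # ratings done by the other 12 raters
print(high_activity_ratings / 12)   # average per active rater
```

The remainder works out to 25,975 ratings, roughly 2,165 apiece for the 12 most active raters.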

Cook et al. have only discussed results related to the ~27,000 ratings. They have never discussed results broken down by individual raters. They have, in fact, refused to share the data which would allow such a discussion to take place. This is troubling. Biases in individual raters are always a problem when having people analyze text.

Biases can arise from differences in worldviews, differences in how people understand the rating system, or any number of other things. These biases don’t mean the raters are bad people or even bad raters. It just means their ratings represent different things. If you take no steps to address that, your ratings can wind up looking like:

[Image: 5-10-pre-reconciliation]

This image shows the ratings broken down by individual rater for the Cook et al. paper. The columns go from zero to seven. Zero meant no rating was given. The others were given as:

1 Explicitly endorses and quantifies AGW as 50+%

2 Explicitly endorses but does not quantify or minimise

3 Implicitly endorses AGW without minimising it

4 No Position

5 Implicitly minimizes/rejects AGW

6 Explicitly minimizes/rejects AGW but does not quantify

7 Explicitly minimizes/rejects AGW as less than 50%

The circles in each column are colored according to rater. Their size indicates the number of times the rater selected that endorsement level. Their position on the y-axis represents the percentage of ratings by that rater which fell on that level.

As you can see, these circles do not line up. Some circles are higher than others, meaning those raters were more likely to pick that particular value. Some circles are lower than others, meaning those raters were less likely to pick that particular value. That shows the raters were biased. If they weren’t, the circles would have lined up.
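The quantity each circle encodes (the percentage of a rater’s ratings falling at each endorsement level) is simple to compute given rater-level data. Here is a minimal sketch with invented ratings, since the real per-rater data was not released:

```python
from collections import Counter

# Hypothetical ratings keyed by rater ID; endorsement levels run 0-7.
# These numbers are invented purely to illustrate the computation.
ratings = {
    "rater_A": [4, 4, 3, 4, 2, 4, 3],
    "rater_B": [4, 3, 3, 3, 4, 4],
}

for rater, levels in ratings.items():
    counts = Counter(levels)
    for level in range(8):
        n = counts.get(level, 0)
        if n:
            # Percentage of this rater's ratings at this level:
            # the y-position of that rater's circle in the figure.
            pct = 100 * n / len(levels)
            print(f"{rater} level {level}: {n} ratings ({pct:.1f}%)")
```

If the raters were unbiased, the percentages for each level would roughly line up across raters; in the figure above, they do not.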

Now then, the authors of the paper did take a step to try to address this issue. When two raters gave different ratings to the same abstract, they were given the opportunity to discuss the disagreement and modify their ratings. This reduced the biases present in the ratings, making the data look like this:

[Image: 5-10-tease]

As you can see, the post-reconciliation data has no zero ratings. It also has fewer biases. Fewer is not none, however. The problem of bias still clearly exists, and it will necessarily affect the study’s results. The biases of raters whose circles are largest will necessarily influence the results more than those of raters whose circles are smaller.

To see why this is a problem, remember each circle’s size is dependent largely upon how active a rater was. Had different raters been more active, the larger circles would have been in different locations. That means the combined result would have been in a different location as well.

To demonstrate, I’ve created a simple image. Its layout is the same as the last figure, but it shows the data for the 12 most active raters combined (yellow). It also shows what the combined result would have been if the activity of those 12 raters had been reversed (red):

[Image: 5-11-test]

There are readily identifiable differences given this simple test. That shows rater bias affects the final results. It’s true this particular test resulted in differences favoring the Cook et al. results, but that doesn’t make it okay. Bias influencing results isn’t okay, and a different test could have produced a different pattern.

Regardless, we now know the results of the Cook et al. paper are influenced by the raters’ individual biases. That’s a problem in and of itself, but it raises a larger question. All the people involved in this study belong to the same group (Skeptical Science). All of these people know each other, talk to one another and have similar overall views related to global warming.

If biases between such a homogenous group can influence their results, what would the results have been if a different group had done the ratings? How would we know which results are right?

Update: It’s worth pointing out that the paper explicitly said, “Each abstract was categorized by two independent, anonymized raters.” That would have mitigated concerns of bias if true. However, it’s difficult to see how a small group of friends can be considered “independent” of one another. That’s especially true when the group actively talked to one another (on a forum run by the lead author), even about how to rate specific papers, while the “independent” ratings were going on. This issue was first noted here, and it’s highly relevant when considering issues of bias.

============================================================

Cook et al’s Hidden Data

You might think this post is about the previously undisclosed material I recently gained possession of. It’s not. Even with the additional material I now have, there is still data not available to anyone.

You see, while people have talked about rater ID information and timestamps not being available, everybody seems to ignore the fact that Cook et al. chose not to release data for 521 of the papers they examined.

I bring this up because Dana Nuccitelli, second author of the paper, recently said:

Morph – all the data are available, except confidential bits like author self-ratings. We even created a website where people can attempt to replicate our results. We could not be more transparent.

Actually, they could be much more transparent. Here is the data file they released showing all papers and their ratings. It has 11,944 entries. Here is a concordance showing the ID numbers of the papers they rated. It has 11,944 entries. Here is a data file showing all the ratings done by the group. It has entries for 11,944 papers.

The problem is there were 12,465 papers:

The ISI search generated 12 465 papers. Eliminating papers that were not peer-reviewed (186), not climate-related (288) or without an abstract (47) reduced the analysis to 11 944 papers

Cook et al eliminated 521 papers from their analysis. That’s fine. Filtering out inappropriate data is normal. What’s not normal is hiding that data. People should be allowed to look at what was excluded and why. Authors should not be able to remove ~4% of their data in a way which is completely unverifiable.
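The count of excluded papers follows directly from the figures quoted from the paper; a minimal sketch:

```python
# Paper counts as stated in the quoted passage from Cook et al.'s methods.
search_results = 12465
not_peer_reviewed = 186
not_climate_related = 288
no_abstract = 47

analyzed = search_results - not_peer_reviewed - not_climate_related - no_abstract
excluded = search_results - analyzed
print(analyzed)   # matches the 11,944 entries in the released data files
print(excluded)   # the papers absent from those files
```

The 521 papers absent from the data files are exactly the difference between the search total and the analyzed total.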

But it’s worse than that. The authors didn’t do what their description suggests they did. Their description suggests only 47 papers their search generated had missing abstracts. That’s not true. Over two hundred of the results did not have abstracts. We know this because John Cook said so in his own forum. In a topic titled Tracking down missing abstracts, he said:

Well, we’ve got the ‘no abstracts’ down to 70 which is not too shabby out of 12,272 papers (and starting with over 200 papers without abstracts). I’m guessing a number of those will be opinion/news/commentary rather than peer-reviewed papers.

The 12,272 doesn’t match the 12,465 number because more papers were added later. That doesn’t matter though. What matters is at least 200 of their search results did not have abstracts. The group went out and looked for missing abstracts, inserting ones they found. No documentation of these insertions has ever been released. The fact their search results were modified has never even been disclosed.

It’s impossible to know which abstracts were added. That means it is impossible to verify the correct abstracts were added without verifying all ~12,000 results. That also means it is impossible to ensure the abstracts added were not a biased sample.

There’s more. We’re told 186 papers were excluded for not being peer-reviewed. No explanation is given as to how they determined which papers were and were not peer-reviewed. Comments in the forums show there was no formal methodology. People just investigated results which seemed suspicious to them. There is no way to know how good a job they did of removing non-peer-reviewed material.

And there’s still more. We’re told 288 papers were excluded for not being climate-related. Again, no explanation is given as to how this filter was applied. It does not seem to have been applied well. For example, while 288 papers were excluded for this reason, one of the most active raters said this in the forum:

I have started wondering if there’s some journals missing from our sample or something like that, because I have now rated 1300 papers and I think I have only encountered a few papers that are actually relevant to the issue of AGW. There are lots ond lots of impacts and mitigation papers but I haven’t seen much of papers actually studying global warming itself. This might be something to consider and check after rating phase.

If only “a few” out of 1300 papers were “actually relevant to the issue of AGW,” how is it 12,177 papers out of 12,465 were “climate related”? The only explanation I can find is most papers are “climate related” but not “actually relevant to the issue of AGW.” This is an example. It’s one of the 64 papers placed in the highest category (explicitly claiming humans cause 50%+ of recent warming), and it says:

This work shows that carbon dioxide, which is a main contributor to the global warming effect, could be utilized as a selective oxidant in the oxidative dehydrogenation of ethylbenzene over alumina-supported vanadium oxide catalysts. The modification of the catalytically active vanadium oxide component with appropriate amounts of antimony oxide led to more stable catalytic performance along with a higher styrene yield (76%) at high styrene selectivity (>95%). The improved catalytic behavior was attributable to the enhanced redox properties of the active V-sites.

If you don’t know what any of that means, don’t feel bad. The paper is about a narrow chemistry subject which has no bearing on global warming. Its only relation to climate is that one clause, “which is a main contributor to the global warming effect.” According to Cook et al., that is apparently enough to make it “climate related.” In fact, it’s enough to make this paper one of the 64 which most strongly support global warming concerns.

Given that, it’s difficult to imagine what papers might have been rated “not climate related.” Fortunately, we don’t have to use our imaginations. While it’s true Cook et al excluded all this data from their data files, it turns out that data is available [via] the search function they built.

Nobody could have guessed that. Nobody who downloaded data files would have thought to go to a website and use a function to find information excluded from those data files. Even if they had, the site only shows the final ratings of those papers. It doesn’t show any intermediary data like that in the data files.

Regardless, it does allow us to check some of the concerns raised in this post. For example, we can do a search to see what sort of papers were considered “not climate related.” I’ll provide the only title in category 1 and part of its abstract:

Now What Do People Know About Global Climate Change? Survey Studies Of Educated Laypeople

When asked how to address the problem of climate change, while respondents in 1992 were unable to differentiate between general “good environmental practices” and actions specific to addressing climate change, respondents in 2009 have begun to appreciate the differences. Despite this, many individuals in 2009 still had incorrect beliefs about climate change, and still did not appear to fully appreciate key facts such as that global warming is primarily due to increased concentrations of carbon dioxide in the atmosphere, and the single most important source of this carbon dioxide is the combustion of fossil fuels.

This abstract is more forceful in its endorsement of global warming concerns than many of the ones labeled “not climate related.” Its topic, what people know about global warming, is certainly more relevant than topics like the molecular chemistry of material production. I could post example after example showing the same pattern. Papers excluded for not being “climate related” are often far more relevant than papers Cook et al. included.

You could never find this out by examining Cook et al’s data files though. Those data files exclude the information necessary to check things like this. It’s only because of an undisclosed difference in their data sets that we could ever hope to check their work on this.

By the way, I encourage everyone to use that search feature to find examples of what I refer to. It’s amazing how many of the papers making up the “consensus” are [ ] “actually relevant to the issue of AGW.”

myrightpenguin

Gross scientific misconduct. Plain and simple.

rogerknights

one of the most active raters said this in the forum:

I have started wondering if there’s some journals missing from our sample or something like that, because I have now rated 1300 papers and I think I have only encountered a few papers that are actually relevant to the issue of AGW. There are lots ond lots of impacts and mitigation papers but I haven’t seen much of papers actually studying global warming itself. This might be something to consider and check after rating phase.

Why didn’t they do that (exclude irrelevancies) before the rating phase?
I suspect that this same flaw–inclusion of the authors of irrelevant impacts and mitigation papers–led to the 97% result in the other two notable 97%-result surveys.

Keith A. Nonemaker

The “fact” that there is a consensus is the keystone of the alarmist argument. Take that away and the whole house of cards collapses. Expose a charlatan and he will scream ever more loudly that he must be believed. The emperor has no clothes.

I don’t know the answer to this question, and maybe it’s a dumb one. Anyway, here it is: Why hasn’t somebody–anybody–just hired a few of the polling companies like Gallup or Rasmussen or Harris to do a poll of scientists on the CAGW issue? I know those companies usually do political polls, but maybe they would do a poll on this issue as well. Seems to me that this is a more trustworthy and credible way of determining the degree to which the CAGW theory is believed and supported in the scientific community (which I hope is actually very low).
Any ideas out there?

Hahahaha! The vanadium paper is great!
I wrote a little bit about the peer-review process from a chemistry standpoint not long back, and that vanadium paper fits right in. All the global warming BS in the abstract is just decoration … just filler to make it sound more interesting. Nothing more.
Another example is the use of ionic liquids (horrible molten solids, actually) as “green” solvents. You can write a paper about doing reactions in the worst “solvents” imaginable, add the word “green” to it, and get it published pronto. It’s all smoke and mirrors though, and usually the real goal is to score a quickie publication, damn the environment.
http://zombiesymmetry.com/2014/04/08/the-peer-reviewed-scientific-literature-is-mostly-crap/
The peer-review process from an organic chemist’s point of view. 😛

Gary

To have validity for its conclusions, such a study must rigorously select the subject papers on a priori criteria, find raters whose opinions are neutral (as much as possible) toward the subject papers, test and prove the neutrality of raters, and then train them to evaluate using a tested rubric. After the rating is done, tests need to be applied to see if ratings drifted due to fatigue and familiarity. Anything less is an unsound research method that allows too many opportunities for error to creep in.
I sort of admire the dozen raters plowing through thousands of abstracts, even in a cursory manner, but take the results as nothing more than a curious exploration, useful only for doing a more rigorous study.

Shawn in High River

Yes, I’m sure he will retract his position once all the facts are exposed (eye roll)

Evan Jones

From what I can tell, Anthony, the surfacestations paper would be rated a 3. #B^)

Excellent Work Brandon!
” Biases in individual raters are always a problem when having people analyze text.”
Rater bias is a problem in many situations beyond textual analysis. In short, any time you use a human being as your instrument you have to take certain precautions and you must record certain data. Suppose I am asking a group of raters to rate a photograph of a person, or anything, according to objective criteria. Here are the steps one typically goes through.
1. Norming the raters. The researcher will pick exemplars that demonstrate how the rating system works. Raters are all trained using the exemplars.
2. Calibrating the raters. Raters are then calibrated against a test set.
3. Rating. Raters are then asked to rate unrated items. You keep track of who rates what, the time they rated it, and their rating.
4. Renorming. Over time raters will drift. For example, it is no shock that you see so many 4s. One reason for this is that raters tend over time to regress to the norm: on a scale of 1-5 they will tend to call everything a 3. When you look at the ratings over time you can clearly see this. So, you renorm; that is, you put the raters through another norming process.
5. Multiple raters. Many people think that multiple raters solve these problems. They don’t.
6. Conflict resolution. Many people think that having a third person re-rate disagreements solves the problem. It doesn’t. You still have to do this, but again, keeping good records will allow you to answer charges that your process is biased.
In short, in any study (and I mean any study) that uses humans as a judging instrument, you must have a written protocol. You must record the important details of the rating process: who rated what? When did they rate it? How often were they in conflict with other raters (compute kappa, for example)? How were they in conflict? Who resolved conflicts, and how did they resolve them? Without this, a study that uses humans as an instrument to make a decision (does picture X show Y; does text Z indicate P; is Sally more pretty than Betty?) is not valid. The issue of rater bias must be addressed, and it can only be addressed with this data.
The other point you raise about data availability is important. Certain papers were dropped. If I am rating a collection of items, say 1,200 things, and I decide that 300 of these things can’t be rated, then I can’t merely say they can’t be rated; I have to present the data and the method I used to exclude them.
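The kappa statistic mentioned in that comment, Cohen’s kappa for chance-corrected agreement between two raters, can be sketched as follows; the ratings here are invented for illustration:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items rated identically.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    count_a = Counter(ratings_a)
    count_b = Counter(ratings_b)
    # Expected agreement if both raters chose labels independently at
    # random according to their own marginal frequencies.
    expected = sum(count_a[k] * count_b[k] for k in count_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example: two raters assigning endorsement levels 1-7.
a = [4, 4, 3, 2, 4, 4, 3, 4]
b = [4, 3, 3, 2, 4, 4, 4, 4]
print(round(cohens_kappa(a, b), 3))  # → 0.529
```

A kappa near 1 means the raters agree far beyond chance; a value near 0 means their agreement is what chance alone would produce. Without rater IDs and timestamps, nothing like this can be computed for the Cook et al. ratings.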

Morph

Glad I asked that question now 😉
But is he right that all of it is out there or not? I mean raw, and then processed at each stage.

more soylent green!

Wait, the content of the papers wasn’t graded? That is, whether or not the paper offered evidence that supported the consensus wasn’t judged; just the abstracts were read? The quality of the science wasn’t considered? Whether or not the content supported the conclusions presented in the abstracts wasn’t judged?
Talk about junk science!

Alex

Naked as a jaybird. We are living in an absurd world where reality is hidden from society by a fog of selfish apathy/silence among real scientists and partisan bluster among politicians and bureaucrats. I’m afraid that science will suffer greatly as society eventually figures out just how much blood and treasure have been expended on this utter nonsense.

JJ

Meanwhile, this asshat:
http://www.huffingtonpost.com/2014/05/12/john-oliver-climate-change-debate_n_5308822.html
over at HBO is parroting Cook’s results, in an assault on “deniers” that is as demeaning as it is factually incorrect.

Keith A. Nonemaker:
“The “fact” that there is a consensus is the keystone of the alarmist argument. Take that away and the whole house of cards collapses. Expose a charlatan and he will scream ever more loudly that he must be believed. The emperor has no clothes.”
There will always be a consensus though. The question is, what is it a consensus of? It all depends on how you go about phrasing things.
These guys report a 97% consensus, and that in turn gets batted around the media as a consensus of scientists think the world is about to end. In reality, it’s just a consensus of people who haven’t explicitly said that humans do not contribute to global warming. It’s all smoke and mirrors.
You write a paper, like that vanadium catalyst paper. You want to dress it up, put some color in it, etc. So you toss in a little blurb about global warming. Nobody, and I mean NOBODY, is going to do the opposite … write a vanadium catalyst paper and state in the abstract that global warming is BS. You either mention global warming in a way that is acceptable to the fashion of the times, or you don’t mention it at all. That a paper such as this is one that was considered by Cook’s crew is amusing as hell.

KenB

Dare I say Tree Hut Conspiracy and Idiotation nutitelli Cooked up!! par for the course it seems

KenB

Hah, my last triggered moderation !
[Rescued and posted. ~mod]

cbrtxus

It seems to me that the 97% claim was unrealistic from the start. He should have had his study produce something more like 60%. But that would have conflicted with other studies that had around 97%. I doubt if 97% of scientists even agree on the time of day. Even if the 97% was true, why would that be relevant? If 97% turned out to be wrong and 3% turned out to be correct, would majority rule? That would be strange science. Studies like that are not a substitute for making the case.

richard

bet they didn’t read the abstracts, just went apple- command +f MAN
if it highlighted man – yep man made warming.

richard

command + f
perforMANce.
yep man made warming.

j ferguson

Maybe I’ve not understood what Cook had set out to do, but one would think that papers about mitigation would be based on an assumption that there was something to mitigate and therefore consensus.

Aphan

They are masters of the psychology of propaganda. They know their papers are crap, and that someone will see through it….or even many people. But by then, the TITLE will have been picked up by the media and the damage has been done. They know that no one in the media bothers to actually check the results, and in between papers, they simply attack and demean the people who are onto them so no one will believe them when they say the study is flawed and they have proof.
This has nothing to do with facts or actual science. It’s a multi-pronged campaign designed to undermine everything solid. They are undermining their own credibility too, but they don’t care, as long as they get what they want in the end.

A conclusion chasing data. That is why the data does not jibe.

I think there might be something wrong with me. Every time I read the phrase “pear shaped”, I think “bootilicious!!!”

Aphan

My first question when exploring the Consensus Tool posted by Cook et al. was why, if you want to examine a supposed SPECIFIC consensus on “anthropogenic global warming/climate change”, you would omit the word “anthropogenic” from the search terms you used to establish your data pool.
I found the answer when I added that specific word, and other similar ones like “man made”, “human caused” etc., to my searches in THEIR system. Doing so turns up fewer than 20 papers that actually address the SPECIFIC type of global warming consensus they set out to verify.

R. de Haan

Yeah, 97% of scientists supporting the AGW consensus…..if you think you live in the former USSR maybe.
Similar propaganda lets us believe a majority of Europeans is in favor of the EU.

Aphan

WWS-I flash to Carol Channing singing Diamonds are a Girls Best Friend…”but square cut or pear shaped, these rocks won’t lose their shape…” ( I know…not Carol’s song, but her version of it was hilarious and the one I think of first)

rogerknights

CD (@CD153) says:
May 13, 2014 at 8:09 am
I don’t know the answer to this question, and maybe it’s a dumb one. Anyway, here it is: Why hasn’t somebody–anybody–just hired a few of the polling companies like Gallup or Rasmussen or Harris to do a poll of scientists on the CAGW issue? I know those companies usually do political polls, but maybe they would do a poll on this issue as well. Seems to me that this is a more trustworthy and credible way of determining the degree to which the CAGW theory is believed and supported in the scientific community (which I hope is actually very low).
Any ideas out there?

I’ve posted repeatedly here that there should be a survey of scientists in climatology and in the neighboring disciplines asking well-thought-out (sophisticated) questions about many facets of this controversy, similar to those posed by the past surveys of the AMS and AGU (last in 2008) by George Mason Univ. (and executed by the Harris polling organization). This would cut the 97% consensus claim down to size. (Unfortunately, I don’t have the ear of Big Oil, apparently.) Here’s a link to their results:
http://stats.org/stories/2008/global_warming_survey_apr23_08.html

rogerknights

j ferguson says:
May 13, 2014 at 9:13 am
Maybe I’ve not understood what Cook had set out to do, but one would think that papers about mitigation would be based on an assumption that there was something to mitigate and therefore consensus.

But mitigation-authors are not climatologists (specialists in the causes of global warming), so their opinions are not authoritative, they are only assumptions. And who cares what they assume?

D.J. Hawkins

@Steven Mosher says:
May 13, 2014 at 8:16 am
Calm, reasoned, insightful and helpful commentary. No snarkiness.
Who are you, and what have you done with the real Steven Mosher???

Ocham

Forgive me if I am misreading this data, but it looks to me like the MAJORITY of papers have no opinion on Global Warming. If I go to 100 people asking whether it will be warm or cold tomorrow, and 50 of them say “Not Sure”, 40 say warm, and 10 say cold, I haven’t found “consensus” on anything. I have found that a majority of people take no opinion.

mpaul

I’m still boggled by the OCD required here. 2,237 papers reviewed on average by each reviewer? How long would it take to open the file, read the paper, ascertain its POV, record the result and close the file? Let’s say you can do 10/hour with allowances for fatigue and delay. That means 224 hours, full-time effort for 5.6 weeks. These were highly motivated reviewers, on a mission, so to speak. Of course, you can dramatically speed up the process if you are simply scanning the abstract for confirmatory words. Sounds like zealots on a mission to find confirmation. We need time stamps.
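The commenter’s back-of-envelope numbers can be reproduced; the 10-per-hour pace and 40-hour week are the comment’s assumptions:

```python
# Workload estimate from the comment above; the pace and work week
# are the commenter's assumptions, not figures from the paper.
ratings_total = 26848
active_raters = 12
papers_per_rater = ratings_total / active_raters  # ~2,237 on average
pace_per_hour = 10
hours = papers_per_rater / pace_per_hour          # ~224 hours
weeks_full_time = hours / 40                      # ~5.6 weeks
print(round(papers_per_rater), round(hours), round(weeks_full_time, 1))
```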

planebrad

In the field of Military Studies, I have several colleagues that have gotten papers published by including references to climate change in the abstract after first being rejected. There was never any intention that their theses examine climate as an aspect of their research, but they discovered that with a few minor modifications their research would be in demand. Two papers I can remember offhand: the first examined how the lessons of anti-access/area denial in the Falklands War could apply to a future conflict over Taiwan, while the second studied the development of countermeasures to low-observable technology. Both tied in a paragraph or two into their conclusions about how “climate change” could play a role and then modified the abstracts.
These papers were written using qualitative methods and only a very liberal reviewer would conclude there was any relevance to climate science, but I have to wonder if they made the cut and became part of the consensus. I know the publications these works ended up in had nothing to do with climate, but with the shenanigans that Shollenberger is describing, I wouldn’t be shocked if they were included.

Once you perfect the measurement of the consensus, you will have an estimate of the consensus among scientists who managed to publish in the sampled journals. You will have measured the degree to which the editors have censored information that does not conform to the dogma du jour. You will have evidence of Post Modern Science relying on its tenet of consensus forming. See the discussion between Mann and Jones in their whistle-blown emails.
Consensus is irrelevant to Modern Science. It ranks scientific models as conjectures, hypotheses, theories and laws according to the degree to which they fit the Real World, make predictions of the Real World, and validate them by independent measurements from the Real World.
>>Scientific theories are ways of explaining phenomena and providing insights that can be evaluated by comparison with physical reality. Each successful prediction adds to the weight of evidence supporting the theory, and any unsuccessful prediction demonstrates that the underlying theory is imperfect and requires improvement or abandonment. IPCC, AR4, ¶1.2 The Nature of Earth Science, p. 95.
Go figger! That from the IPCC!! PMS Headquarters. Read on, though, and watch IPCC wander off into PMS:
>>The attributes of science briefly described here can be used in assessing competing assertions about climate change. Can the statement under consideration, in principle, be proven false? [What happened to predictions and success?] Has it been rigorously tested? Did it appear in the peer-reviewed literature? Did it build on the existing research record where appropriate? If the answer to any of these questions is no, then less credence should be given to the assertion until it is tested and independently verified. [Independent?] The IPCC assesses the scientific literature to create a report based on the best available science (Section 1.6). Id.
Pass the Midol.

mpaul:
“That means 224 hours — full time effort for 5.6 weeks. These were highly motivated reviewers — on a mission, so to speak. Of course, you can dramatically speed up the process if you are simply scanning the abstract for confirmatory words. Sounds like zealots on a mission to find confirmation.”
Bingo!
The work, if done appropriately, is unimaginable unless the person doing the work is on some kind of holy mission. I would imagine it went along the lines you suggest: scan for certain phrases, open up the “hits,” do a Ctrl-F to find the sentence, read it, check the box, and move on.

If the count was by number of papers, that’s wrong; it should be by number of authors, otherwise authors who write multiple papers are given more weight.
The main problem here is that the study was so stupidly designed.
Number one problem: this study is supposed to reflect current scientific opinion but reviews papers written not in the present but in the past.
Number two problem: if you want to know the author’s opinion, well, JUST ASK HIM/HER, so there is no problem interpreting the author’s opinion.
Number three problem: bias. If an author provides no opinion in his paper about global warming, it’s likely because he isn’t sure. “Isn’t sure” is an opinion that should be counted, rather than tossed out because no opinion is provided.
This study is garbage, inside and out.

asybot

@rogerknights, I read that survey (thanks for the link). I wonder how many have changed their minds since 2008, seeing that in those days they might have feared for their jobs, tenure, or even grants?

If you wanted to use the worst graphing method possible, you were highly successful with your pear-shaped method.
I quit reading when I got to [the] graphs, because they convey little information to me without my spending a lot of extra time deciphering them.

Billy Liar

Steven Mosher says:
May 13, 2014 at 8:16 am
+1 good comment.

Bob Koss

The motivated group had 23061 ratings, the drudgery group had 607 ratings, and two ratings were made per paper. That should result in 11834 papers, not the 11944 they claim to have rated. That is, unless not all papers received two ratings.
Another minor detail. Their paper says they downloaded the papers in March when their progress graph shows them rating them since early February.
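Bob Koss's arithmetic can be checked directly. A minimal sketch, using the counts as he states them (his group labels, not figures from the Cook et al. paper itself):

```python
# Numbers as given in Bob Koss's comment; "motivated" and "drudgery"
# are his labels for the two groups of raters.
motivated = 23061            # ratings by the most active raters
drudgery = 607               # ratings by the remaining raters
total_ratings = motivated + drudgery

# If every paper received exactly two ratings, the implied paper count is:
implied_papers = total_ratings // 2
claimed_papers = 11944       # the count the paper claims was rated

print(total_ratings)                      # 23668
print(implied_papers)                     # 11834
print(claimed_papers - implied_papers)    # 110 papers unaccounted for
```

The 110-paper gap is exactly the discrepancy the comment points to: either some papers got more (or fewer) than two ratings, or the totals don't reconcile.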

Pethefin

The biases of the raters are unavoidable when top raters included people like Ari Jokimäki, who, based on the information on his own blog (http://agwobserver.wordpress.com), clearly has a mission to “debunk anti-AGW papers”. Such AGW soldiers can hardly provide scientifically sound ratings of anything when their true mission is to prove the existence of a consensus.

rogerknights

asybot says:
May 13, 2014 at 10:22 am
@rogerknights, I read that survey (thanks for the link). I wonder how many have changed their minds since 2008, seeing that in those days they might have feared for their jobs, tenure, or even grants?

Agreed–the Pause must have taken a toll. OTOH, the recent re-endorsements by the official bodies of the AGU and AMS will probably sway some members in the alarmist direction.

Climate science is political science.

pokerguy

None of this careful analysis is going to make the slightest bit of difference. This isn’t to say that it shouldn’t be done of course. But the 97 percent myth is now written in stone. I see some variant of it in almost every alarmist screed I’ve ever read…

Specter

Where can I find the fisking of all three of the “97%” studies? I know I’ve read about all of them here, but I have been having an email conversation with my US Senator dealing with their all-night chat session a few months back. He likes to throw the 97% at me, and I would like to present some cogent facts back in his direction – but time is limited. Thanks.

tjfolkerts

It is worth pointing out that there will be random variations in any such study. Even if the papers simply had the numerals “1” through “7” written on them and the ‘raters’ were simply recording the numbers on the papers they were assigned, some people will get more 3’s and others will get more 4’s. This does not — in and of itself — imply any bias. It would be worth doing a little more statistics to see if there truly were statistically significant differences among the raters.
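The test tjfolkerts suggests would typically be a Pearson chi-square test of independence on a rater-by-category count table. A minimal sketch below, with invented counts for two hypothetical raters across three rating bins (the real per-rater data was not released, so these numbers are purely illustrative):

```python
# Hypothetical rater-by-category counts (invented for illustration only).
counts = [
    [30, 50, 20],   # rater A: papers placed in three rating bins
    [20, 60, 20],   # rater B
]

row_totals = [sum(row) for row in counts]
col_totals = [sum(col) for col in zip(*counts)]
grand = sum(row_totals)

# Pearson chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * col_total / grand_total under independence.
chi2 = sum(
    (counts[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(len(counts))
    for j in range(len(counts[0]))
)
dof = (len(counts) - 1) * (len(counts[0]) - 1)
print(round(chi2, 3), dof)  # 2.909 2
```

With 2 degrees of freedom the 5% critical value is about 5.99, so these particular (made-up) raters differ no more than chance would allow: exactly tjfolkerts's point that raw variation among raters does not, by itself, demonstrate bias.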

DrTorch

On your second section, I don’t think either example is a smoking gun. As much as I would have liked to see evidence of scientific misconduct, neither example worked.
“This abstract is more forceful in its endorsement of global warming concerns than many of the ones labeled “not climate related.” Its topic, what people know about global warming, is certainly more relevant than topics like the molecular chemistry in material production.”
is simply wrong. That paper was a poll of laymen’s opinions. It was relevant in subject matter, but it did not reflect a scientist’s position on AGW and was not appropriate for Cook’s survey.
OTOH, the catalyst paper seemed exactly to state scientists’ positions. The reference to AGW was odd, and not relevant, but I would infer that the authors (chemists/scientists) did subscribe to the current CO2 induced AGW theory.

talldave2

Climate scientists are just as good at social science as they are at statistics, geology, meteorology, biology, chemistry, physics and math.

j ferguson

RogerKnights, if the assumption of a need to mitigate by a non-climatologist doesn’t add to the consensus, then the paper shouldn’t have been in the survey, wouldn’t you think? And if papers by non-climatologists shouldn’t have been in there, how could they ever have reached those numbers in the high 20,000s?
Did these guys get anything right?

DrTorch:
“OTOH, the catalyst paper seemed exactly to state scientists’ positions. The reference to AGW was odd, and not relevant, but I would infer that the authors (chemists/scientists) did subscribe to the current CO2 induced AGW theory.”
You know, it probably started out as:
“This work shows that carbon dioxide could be utilized as a selective oxidant in the oxidative dehydrogenation of ethylbenzene over alumina-supported vanadium oxide catalysts.”
Until the prof or one of his grad. students got the idea to dress it up with the clause “which is a main contributor to the global warming effect.”
This particular example is just awesome! It’s like, you write a paper on the first synthesis of some natural product which may have some therapeutic uses, and so you dress up the abstract and introduction with a little cancer talk. “Breast cancer affects millions of women world-wide, and there is a need for better drugs to treat it, blah, blah, blah.” Or, you write some crappy paper about some crappy reaction that nobody cares about, but one of the reagents is “green,” so that’s what you push. It’s hysterical.

D’oh. Reading this, I saw three typos. You guys highlighted one. Another was typing “vai” instead of “via.” The third is the most important though. My third to last paragraph begins with:

This abstract is more forceful in its endorsement of global warming concerns than many of the ones labeled “not climate related.”

The word “not” shouldn’t have been in that sentence.
Apparently I should proofread better. Or get an editor. Either way, I’m going to go fix those mistakes now.
[Done. Though the last sentence actually reads legibly both ways. 8<) Mod]