Cook's 97% climate consensus paper crumbles upon examination

Bjørn Lomborg writes on his Facebook page:

Ugh. Do you remember the “97% consensus”, which even Obama tweeted?

Turns out the authors don’t want to reveal their data.

It has always been a dodgy paper (http://iopscience.iop.org/1748-9326/8/2/024024/article). Virtually everyone I know in the debate would automatically be included in the 97% (including me, but also many who are much more skeptical).

The paper looks at 12,000 papers written over the last 25 years (the paper doesn’t actually specify the numbers; see http://notalotofpeopleknowthat.wordpress.com/2013/07/12/watch-the-pea/). It ditches about 8,000 papers because they don’t take a position.

They sort the papers that do take a position into three bins: 1.6% that explicitly endorse human-caused global warming with quantification, 23% that explicitly endorse it without quantification, and 74% that “implicitly endorse” it because they study other aspects of global warming, which is taken to imply agreement that the warming is human-caused.

Voilà, you get about 97% (actually 98% here, but because the authors haven’t released the numbers themselves, we have to rely on others’ quantitative assessments).
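As a rough sanity check of that arithmetic, here is how the percentages quoted above combine. This uses only the figures cited in the post, not the underlying data, so treat it as illustrative:

```python
# Figures as quoted in the post (not the released data): ~12,000 abstracts,
# of which ~8,000 are dropped for taking no position.
total_abstracts = 12000
no_position = 8000
position_taking = total_abstracts - no_position  # ~4,000

# Shares of the position-taking abstracts, per the three bins above.
explicit_with_numbers = 1.6      # % explicitly endorsing, with quantification
explicit_without_numbers = 23.0  # % explicitly endorsing, without quantification
implicit = 74.0                  # % "implicitly endorsing"

endorse_share = explicit_with_numbers + explicit_without_numbers + implicit
print(f"Endorsement share of position-taking abstracts: {endorse_share:.1f}%")        # ~98.6%
print(f"Abstracts taking any position at all: {position_taking / total_abstracts:.0%}")  # ~33%
```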

Notice that *nobody* said anything about *dangerous* global warming; this meme simply got attached afterwards (by Obama and many others).

Now, Richard Tol has tried to replicate their study and it turns out they have done pretty much everything wrong. And they don’t want to release the data so anyone else can check it. Outrageous.

Read Tol’s letter to Peter Høj, Vice-Chancellor of the University of Queensland: “the main finding of the paper is incorrect, invalid and unrepresentative.” (http://www.uq.edu.au/about/vice-chancellor)

It would be hilarious if it wasn’t so sad.

==============================================================

Dear Professor Høj,

I was struck by a recent paper published in Environmental Research Letters with John Cook, a University of Queensland employee, as the lead author. The paper purports to estimate the degree of agreement in the literature on climate change. Consensus is not an argument, of course, but my attention was drawn to the fact that the headline conclusion had no confidence interval, that the main validity test was informal, and that the sample contained a very large number of irrelevant papers while simultaneously omitting many relevant papers.

My interest piqued, I wrote to Mr Cook asking for the underlying data and received 13% of the data by return email. I immediately requested the remainder, but to no avail.

I found that the consensus rate in the data differs from that reported in the paper. Further research showed that, contrary to what is said in the paper, the main validity test in fact invalidates the data. And the sample of papers does not represent the literature. That is, the main finding of the paper is incorrect, invalid and unrepresentative.

Furthermore, the data showed patterns that cannot be explained by either the data gathering process as described in the paper or by chance. This is documented at https://docs.google.com/file/d/0Bz17rNCpfuDNRllTUWlzb0ZJSm8/edit?usp=sharing

I asked Mr Cook again for the data so as to find a coherent explanation of what is wrong with the paper. As that was unsuccessful, even after a plea to Professor Ove Hoegh-Guldberg, the director of Mr Cook’s workplace, I contacted Professor Max Lu, deputy vice-chancellor for research, and Professor Daniel Kammen, journal editor. Professors Lu and Kammen succeeded in convincing Mr Cook to release first another 2% and later another 28% of the data.

I also asked for the survey protocol but, in violation of all codes of practice, none seems to exist. The paper and data do hint at what was really done. There is no trace of a pre-test. Rater training was done during the first part of the survey, rather than prior to it. The survey instrument was altered during the survey, and abstracts were added. Scales were modified after the survey was completed. All this introduced inhomogeneities into the data that cannot be controlled for, as they are undocumented.

The later data release reveals that what the paper describes as measurement error (in either direction) is in fact measurement bias (in one particular direction). Furthermore, there is drift in measurement over time. This makes a greater nonsense of the paper.

This is documented here http://richardtol.blogspot.co.uk/2013/08/the-consensus-project-update.html and http://richardtol.blogspot.co.uk/2013/08/biases-in-consensus-data.html.

I went back to Professor Lu once again, asking for the remaining 57% of the data. In particular, I asked for rater IDs and time stamps. Both may help to understand what went wrong.

Only 24 people took the survey. Of those, 12 quickly dropped out, so that the survey essentially relied on just 12 people. The results would be substantially different if only one of the 12 were biased in one way or the other. The paper does not report any test for rater bias, an astonishing oversight by authors and referees. If rater IDs are released, these tests can be done.
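For illustration only, a minimal sketch of one such rater-bias test: a chi-square test of whether the distribution of ratings is independent of who did the rating. The rater IDs and ratings below are made up, since the actual data have not been released.

```python
# Sketch of a rater-bias test: are rating categories distributed
# independently of the rater? All data below are hypothetical; the
# real rater IDs and ratings have not been released.
import random
from collections import Counter

from scipy.stats import chi2_contingency

random.seed(1)
raters = [f"rater_{i}" for i in range(1, 13)]    # the 12 active raters
categories = list(range(1, 8))                   # endorsement levels 1..7

# Hypothetical ratings: each rater rates a few hundred abstracts.
ratings = [(r, random.choice(categories)) for r in raters for _ in range(300)]

# Build a raters-by-categories contingency table.
counts = Counter(ratings)
table = [[counts[(r, c)] for c in categories] for r in raters]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3f}")
# A small p-value would indicate that rating behaviour differs by rater.
```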

Because so few took the survey, these few answered on average more than 4,000 questions. The paper is silent on the average time taken to answer these questions and, more importantly, on the minimum time. Experience has it that interviewees find it difficult to stay focused if a questionnaire is overly long. The questionnaire used in this paper may have set a record for length, yet neither the authors nor the referees thought it worthwhile to test for rater fatigue. If time stamps are released, these tests can be done.
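Likewise, a minimal sketch of a rater-fatigue check, assuming hypothetical per-item time stamps (none have been released): it asks whether the time spent per abstract, or the rating itself, drifts with the item’s position in the rating sequence.

```python
# Sketch of a rater-fatigue check: does time-per-abstract or the rating
# itself drift as a rater works through thousands of items? The time
# stamps and ratings below are hypothetical.
import random

from scipy.stats import spearmanr

random.seed(2)
n_items = 4000  # roughly the average number of ratings per active rater

# Hypothetical per-item records: (position in sequence, seconds spent, rating 1..7)
records = [(i, random.uniform(5, 60), random.randint(1, 7)) for i in range(n_items)]

positions = [pos for pos, _, _ in records]
seconds = [sec for _, sec, _ in records]
scores = [rat for _, _, rat in records]

rho_time, p_time = spearmanr(positions, seconds)
rho_score, p_score = spearmanr(positions, scores)
print(f"seconds vs position: rho = {rho_time:+.3f}, p = {p_time:.3f}")
print(f"rating  vs position: rho = {rho_score:+.3f}, p = {p_score:.3f}")
# A strong negative correlation of seconds with position (items answered ever
# faster), or a drift in the ratings themselves, would point to fatigue.
```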

Mr Cook, backed by Professors Hoegh-Guldberg and Lu, has flatly refused to release these data, arguing that a data release would violate confidentiality. This reasoning is bogus.

I don’t think confidentiality is relevant. The paper presents the survey as a survey of published abstracts, rather than as a survey of the raters. If these raters are indeed neutral and competent, as claimed by the paper, then tying ratings to raters would not reflect on the raters in any way.

If, on the other hand, this was a survey of the raters’ beliefs and skills, rather than a survey of the abstracts they rated, then Mr Cook is correct that their identity should remain confidential. But this undermines the entire paper: It is no longer a survey of the literature, but rather a survey of Mr Cook and his friends.

If need be, the association of ratings to raters can readily be kept secret by means of a standard confidentiality agreement. I have repeatedly stated that I am willing to sign an agreement that I would not reveal the identity of the raters and that I would not pass on the confidential data to a third party either on purpose or by negligence.

I first contacted Mr Cook on 31 May 2013, requesting data that should have been ready when the paper was submitted for peer review on 18 January 2013. His foot-dragging, condoned by senior university officials, does not reflect well on the University of Queensland’s attitude towards replication and openness. His refusal to release all data may indicate that more could be wrong with the paper.

Therefore, I hereby request, once again, that you release rater IDs and time stamps.

Yours sincerely,

Richard Tol

http://richardtol.blogspot.co.uk/2013/08/open-letter-to-vice-chancellor-of.html

73 Comments
RC Saumarez
August 28, 2013 9:52 am

The consensus was obviously rubbish. I’m afraid that no amount of debunking will alter the opinion of true believers.
I doubt that any action will be taken against Cook by the University of Queensland.

Bill Marsh
Editor
August 28, 2013 9:56 am

This paper appears to be an example of the new ‘post-normal’ science.

JimS
August 28, 2013 9:56 am

Once that 97% consensus of scientists went out to the public, there is no turning back, even if Cook came clean and wore sackcloth and poured ashes over his head in full public repentance. Regardless, I admire Mr. Tol’s perseverance in this matter. Truth is always more precious than lies, even though few would hold it.

arthur4563
August 28, 2013 9:56 am

I must reiterate my objection to assuming that opinions expressed about global warming twenty-five years ago can have much current validity.

Rhoda R
August 28, 2013 9:58 am

Actually getting a response may be every bit as interesting as the response itself.

arthur4563
August 28, 2013 9:58 am

To be clearer, if one wants to find the current opinions of researchers, reading their opinions expressed twenty-five years ago is not a valid way of doing so.

ZT
August 28, 2013 9:59 am

There’s a university in Queensland?

August 28, 2013 10:01 am

Richard Tol, you are a much more patient man than I … which may also translate into “more productive” as well. [snip -policy violation – Anthony]
Keep the heat on them …
w.

August 28, 2013 10:08 am

Just to state the obvious, there is no use at all for “consensus” in science. Consensus is an evil political concept. (Sorry if duplicate, something went wrong here.)

Resourceguy
August 28, 2013 10:08 am

Maybe there is some good science here after all. A new linkage has been revealed between low research data integrity and poor political leadership. More followup studies are needed.

August 28, 2013 10:12 am

arthur4563 says at 9:58 am:
“To be clearer, if one wants to find the current opinions of researchers, reading their opinions expressed twenty-five years ago is not a valid way of doing so.”
It would be easy to stratify the data by 8-year bands to see whether there is a statistically significant time-based trend in abstract content (a sketch follows this comment).
But let’s face it, whatever is measured is highly conflated with time-based changes in journals’ editorial practices for abstracts and in the editorship of journals, which are in business to sell their products. Heck, even the census of journals has changed over time.
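A minimal sketch of the stratification suggested in the comment above, using a hypothetical set of abstracts with a publication year and an endorsement flag (this is not the Cook et al. data):

```python
# Sketch of stratifying abstracts into 8-year publication bands and
# computing the endorsement rate within each band. Hypothetical data only.
import random
from collections import defaultdict

random.seed(3)
# Hypothetical abstracts: (publication year, endorses AGW? True/False/None)
abstracts = [(random.randint(1991, 2011), random.choice([True, False, None]))
             for _ in range(12000)]

bands = defaultdict(lambda: {"endorse": 0, "position": 0})
for year, endorses in abstracts:
    if endorses is None:                 # no position; excluded, as in the paper
        continue
    start = (year // 8) * 8
    band = f"{start}-{start + 7}"
    bands[band]["position"] += 1
    bands[band]["endorse"] += int(endorses)

for band in sorted(bands):
    b = bands[band]
    rate = 100.0 * b["endorse"] / b["position"]
    print(f"{band}: {b['position']:5d} position-taking abstracts, {rate:.1f}% endorse")
# A formal trend test across bands (e.g. chi-square) could then show whether
# any change over time is statistically significant.
```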

Jean Parisot
August 28, 2013 10:13 am

Someone might want to make sure this little issue gets in the hands of the political campaign managers down in Oz, quickly.

Richard D
August 28, 2013 10:15 am

According to Dr. Tol, “John Cook (in a survey of himself and 11 mates) found…”
http://joannenova.com.au/2013/08/richard-tol-half-cooks-data-still-hidden-rest-shows-result-is-incorrect-invalid-unrepresentative/#more-30203
So the survey subject size was tiny. Apparently all of the subjects were connected to Cook’s work group. Wow.

August 28, 2013 10:16 am

I think continued international discussion of this paper gives it more influence. What has been revealed is sufficient to trash it (thanks to Dr. Tol). As we already know, you end up generating sympathy for these guys in the face of continued “badgering”, as they like to say. Mann and Gleick wound up getting society medals and awards for stiffing interrogators. Lewandowski got a professorship in the UK and a royal welcome from the Royal Society. The IPCC and Al Gore got Nobel Prizes. Obama got one as a bribe to get him to come to Copenhagen and surrender to the socialists. Watch for it: Cook became a published scientist, a big step up from being a cartoonist!! He is going to get a medal of recognition for his good work.

August 28, 2013 10:16 am

Only 24 people took the survey. Of those, 12 quickly dropped out, so that the survey essentially relied on just 12 people.
So that’s 97% of the interpretations of just 12 people.
Just 12.
And were these dozen an unbiased jury?
Well, one of them was the author, John Cook, himself.
Not only is he clearly biased (see his website SkS) but he also doesn’t get the point of double-blind trials.

August 28, 2013 10:22 am

The first clue that this paper was bogus was the 97% figure. Only in places like N. Korea can you get 97% consensus on anything. Is anyone surprised that Cook, et al, like Mann, et al, refused to provide data for the report? Does the journal have a requirement to provide data?

August 28, 2013 10:27 am

Thanks Richard

Bill
August 28, 2013 10:27 am

Why would they have to be identified by name? Why not rater #1, etc.?

Tommy Roche
August 28, 2013 10:29 am

Ever since the (in)famous Doran survey, this statement that “97% of scientists agree” has been thrown about by alarmists when faced with difficult questions, or used by journalists to pad out a scary climate story for which little or no evidence existed. There is no doubt in my mind that from the moment John Cook came up with this plan, the plan was for reinforcement of that 97% message. It was ALWAYS going to be the outcome. Anything else just would not do.

kadaka (KD Knoebel)
August 28, 2013 10:30 am

dccowboy said on August 28, 2013 at 9:56 am:

This paper appears to be an example of the new ‘post-normal’ science.

Hopefully it’s something unique unto itself, an example of post-science science.

DGP
August 28, 2013 10:33 am

This reminds me of Einstein’s response to A Hundred Authors Against Einstein, published in 1931.
“If I were wrong, then one would have been enough!”

August 28, 2013 10:34 am

I checked IOP and they say “We encourage authors to make their data freely available by publishing it alongside their article as supplementary data at no extra cost.” In other words, there is no requirement to provide supporting data. The Journal of the American Chemical Society goes into great detail in its author guidelines specifying what data must be presented either in the manuscript or as attachments with the submission. I’d say that Mr. Cook’s publication requirements were a tad less rigorous than others’.

DaveF
August 28, 2013 10:43 am

ZT Aug 28 9:59am: “There’s a university in Queensland?”
There was a James Cook University in Queensland when Bob Carter was there.

August 28, 2013 10:44 am


The lapse in security at SkS Forum has come to haunt us. Among that material, there is a graph that shows, after 16,000 (out of 27,000) ratings were completed, the 11 most active raters and their scores. From there, it is an easy step to say rater 1 = John Cook, rater 2 = Dana Nuccitelli, rater 3 = Rob Honeycutt, etc.

August 28, 2013 10:58 am

I add my thanks.
