John Cook's new survey – lots of questions, no answers

I, along with (according to Cook) 50 other blogs (with a supposed 50/50 skeptic-to-advocate split), have received this invitation:

Hi Anthony

As one of the more highly trafficked climate blogs on the web, I’m seeking your assistance in conducting a crowd-sourced online survey of peer-reviewed climate research. I have compiled a database of around 12,000 papers listed in the ‘Web Of Science’ between 1991 to 2011 matching the topic ‘global warming’ or ‘global climate change’. I am now inviting readers from a diverse range of climate blogs to peruse the abstracts of these climate papers with the purpose of estimating the level of consensus in the literature regarding the proposition that humans are causing global warming. If you’re interested in having your readers participate in this survey, please post the following link to the survey:

[redacted for the moment]

The survey involves rating 10 randomly selected abstracts and is expected to take 15 minutes. Participants may sign up to receive the final results of the survey (de-individuated so no individual’s data will be published). No other personal information is required (and email is optional). Participants may elect to discontinue the survey at any point and results are only recorded if the survey is completed. Participant ratings are confidential and all data will be de-individuated in the final results so no individual ratings will be published.

The analysis is being conducted by the University of Queensland in collaboration with contributing authors of the website Skeptical Science. The research project is headed by John Cook, research fellow in climate communication for the Global Change Institute at the University of Queensland.

This study adheres to the Guidelines of the ethical review process of The University of Queensland. Whilst you are free to discuss your participation in this study with project staff (contactable on +61 7 3365 3553 or j.cook3@uq.edu.au), if you would like to speak to an officer of the University not involved in the study, you may contact the Ethics Officer on +61 7 3365 3924.

If you have any questions about the survey or encounter any technical problems, you can contact me at j.cook3@uq.edu.au

Regards,

John Cook

University of Queensland/Skeptical Science

I asked Cook a series of questions about it because, given his behavior with Lewandowsky, I have serious doubts about the veracity of this survey. I asked to see the ethics approval application and the University's approval, and he declined, saying that releasing them would compromise the survey by revealing its internal workings. I also asked why each of the 50 emails sent out had a different tracking code on it, and he declined to explain that for the same reason. I asked to see the list of 12,000 papers, so that I could check whether the database was a true representation of the peer-reviewed landscape; he declined that too, but said the list would be posted “very soon”.

I had concerns about the tracking codes on each email sent out, and I ran some tests on them. I also tested whether the survey could be run without tracking codes; it cannot. I asked him if he would simply provide a single code for all participants, so that there could be no chance of binning data by skeptic/non-skeptic blog, or of preselecting the papers presented based on the code. I said this would truly ensure a double blind. He declined that request as well.
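To illustrate the concern (a hypothetical sketch, not Cook's actual implementation): a distinct per-blog token embedded in each survey link is all a server needs to bin every response by referring blog.

```python
# Hypothetical sketch of how a per-blog tracking token in a survey URL
# could be used server-side. This is NOT Cook's actual code; the URL,
# function names, and token scheme are all invented for illustration.
import hashlib

SURVEY_BASE = "https://example.edu/survey"  # placeholder URL


def link_for_blog(blog_name: str) -> str:
    """Give each invited blog a distinct, stable token in its survey link."""
    token = hashlib.sha256(blog_name.encode()).hexdigest()[:8]
    return f"{SURVEY_BASE}?t={token}"


def bin_response(token: str, skeptic_tokens: set) -> str:
    """With distinct tokens, each response can be binned by blog type."""
    return "skeptic" if token in skeptic_tokens else "advocate"
```

A single shared token for all participants, as requested above, would make this kind of binning (or any token-based preselection of abstracts) impossible.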

He stated that he had an expectation (based on past experience) that no skeptic bloggers would post the survey anyway. So why send it then?

Meanwhile many other bloggers shared their concerns with me. Lucia posted a large list of questions about Cook’s survey methodology here:

http://rankexploits.com/musings/2013/dear-john-i-have-questions/

It is a good list, and Lucia’s concerns are valid.

Brandon Shollenberger writes at Lucia’s in comments about some tests he did:

========================================================

Brandon Shollenberger (Comment #112328)

May 3rd, 2013 at 12:48 am

For those following at home, the issue I wanted to talk to Lucia about is the non-randomness of this survey. I was curious when two people at SkS said they got an abstract which said (in part):

Agaves can benefit from the increases in temperature and atmospheric CO2 levels accompanying global climate change

I got the exact same abstract when I clicked on the link at SkS. I wondered if that meant there were only 10 abstracts being used at all. I then had a disturbing thought. The earlier Lewandowsky survey had different versions sent to different people for publishing. What if they had done that here? What if each site was sent a link to 10 different abstracts?

To test this, I contacted lucia to get the link she was sent. I then was able to find a site which had already posted the survey, and I got a different link from it. It turned out all of them resulted in me getting the same survey. I concluded everyone was simply getting the exact same 10 abstracts.

I was going to post a comment to that effect when lucia told me she did not get the Agave abstract I referred to. That made me take a closer look. What I found is by using proxies, I was able to get a number of different surveys. Moreover, some proxies got the same surveys as others. That suggests the randomization is not actual randomization, but instead, different samples are given based on one’s IP address.

Unfortunately, that’s not the end of the story. I’ve followed the links with my original IP address again, and I now get a different sample. However, each time I follow the link with the same IP address now, I get the same sample. That suggests I was right about IP addresses determining which sample you get, but there’s an additional factor. My first guess would be time, but if that’s the case, it’s a strange implementation of it. It would have to be something like an hourly (or even daily) randomization or some sort of caching, neither of which makes any sense to me.

Anyway, my head hurts from trying to figure out what screwy “randomization” John Cook is using. I know it’s nothing normal, and it certainly isn’t appropriate, but trying to figure out what sort of crazy thing he might have done is… difficult. I have no idea why he wouldn’t just use a standard approach like having time in seconds be a seed value for an RNG that picks 10 unique values each time someone requests a survey from the server.

=============================================================
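Brandon’s observations are consistent with a deterministic assignment keyed on IP address, possibly combined with a time bucket, rather than a fresh random draw per request. The two schemes can be sketched side by side; this is a speculative reconstruction, not Cook’s code, and every name in it is invented:

```python
# Speculative sketch contrasting the behavior Brandon describes with the
# standard approach he recommends. NOT Cook's actual implementation.
import hashlib
import random
import time

ABSTRACT_IDS = list(range(12000))  # stand-in for the ~12,000 papers


def sample_deterministic(ip: str, bucket_hours: int = 1) -> list:
    """IP-keyed 'randomization': the same IP, within the same time bucket,
    always gets the same 10 abstracts - matching the observed behavior."""
    bucket = int(time.time() // (bucket_hours * 3600))
    seed = int(hashlib.sha256(f"{ip}:{bucket}".encode()).hexdigest(), 16)
    return random.Random(seed).sample(ABSTRACT_IDS, 10)


def sample_random() -> list:
    """Standard approach: a fresh draw of 10 unique abstracts per request,
    independent of who is asking."""
    return random.sample(ABSTRACT_IDS, 10)
```

Under the first scheme, two proxies sharing an exit IP would see identical surveys, and the same IP would see a new sample only after the time bucket rolls over.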

So the sampling appears non-random after all, and has what I (and others) consider fatal issues.

If you want to look at the survey, you can go to Cook’s website and take it there, because, like Lucia, I won’t be posting the coded link on this blog until some answers are forthcoming.

See Cook’s survey link: Participate in a survey measuring consensus in climate research

Jeff F.
May 5, 2013 9:48 am

A group collected 12,000 AGW papers, read them all, and wrote 150-word summaries; all without introducing their own biases… incredible.

Brandon Shollenberger
May 5, 2013 12:07 pm

I’m a bit late to the party, but I noticed I got mentioned in this post and in the comments. I highly recommend people not focus on my earlier comments about this survey. I was pretty much fumbling around trying to diagnose a problem I knew existed, and I came up with several wrong ideas.
You’d be much better off looking at the post I just made on the issue.
And not that it really matters, but there is no ‘c’ in my name.

mike
May 5, 2013 6:55 pm

Have some well earned fun with the Cookster’s survey:
http://climateaudit.org/2013/05/05/cooks-survey/

May 5, 2013 7:43 pm

I have now received an email back from John Cook confirming my suspicions that his survey would be subject to skewed abstracts, and that he recognises this as a risk but claims to have a solution:
From: John Cook [mailto:j.cook3@uq.edu.au]
Sent: Sunday, 5 May 2013 9:39 a.m.
To: Ian Wishart
Subject: Re: Why your survey premise is cobblers
Hi Ian,
Thanks for the feedback. We’ve anticipated this possible limitation in the study design and have used an independent method to measure the level of consensus in the “study proper”.
Regards
John
On 04/05/2013, at 10:19 PM, Ian Wishart wrote:
John
I read with interest your plan for a survey of the published climate literature abstracts with a view to finding a consensus on the level of AGW.
Unfortunately, your project is pursuing a false premise – that the abstracts are a true reflection of the actual research data in the studies.
One of the things I quickly became aware of while researching the book Air Con was how nearly every study carried an obligatory tip of the hat to AGW, yet often the data did not necessarily support that conclusion. I realised that a number of good researchers were banging the required phraseology into their papers to keep their masters and grants controllers happy, but they were also letting the data speak for itself in the study proper.
A survey of abstracts will be meaningless.
Regards
Ian Wishart

barry
May 5, 2013 8:44 pm

I did the survey and got a result that was less favourable than the actual authors of the papers.

Of the 10 papers that you rated, your average rating was 3.5 (to put that number into context, 1 represents endorsement of AGW, 7 represents rejection of AGW and 4 represents no position). The average rating of the 10 papers by the authors of the papers was 3.3.

It appears that, if I have a bias, it is to the negative side of endorsing AGW. Or else I’m objective and the abstracts are more neutral than the full papers (which is what the papers’ authors rate). Or the authors of the 12,000 papers are generally biased in one direction.
Now, the actual study gets the authors of the papers to rate their own work – the full paper. I don’t know whether the Cook et al paper rates only the abstract.
The public survey is an addendum to, not a core part of the study.
In the comments at SkS the participants gave less endorsement of AGW to their 10 abstracts than did the actual authors of the papers cited, same as me. This indicates a lack of expected bias for that readership, which is heartening.
My abstracts contained nothing about Agaves. There were 2 papers that implicitly rejected/minimised AGW. This is more than I would have expected based on researching any and all papers on various topics in google scholar over the last 7 years. Most papers/abstracts are neutral on AGW. A few (less than 10% in my experience) implicitly or explicitly reject/minimize AGW. I’ve read several hundred papers/abstracts in that time, so I don’t know how representative that number would be (most read subjects are: sea level, millennial temp records, climate sensitivity, Arctic sea ice, instrumental record, orbital dynamics and climate, ice ages, carbon cycle, radiative forcing, spectral analysis, global energy budget, hydrological cycle).
FWIW.

Justus
May 6, 2013 1:12 am

Why does this even have to be an email survey? Why can’t the papers be published and people be able to state whether the abstract explicitly/implicitly goes with “The Consensus?” It’s EASY for someone to hold the full list of papers in their pockets and only hash out 10 of them. It’s much harder, however, to actually put everything out in the open, put some protective measures on it as to make sure things don’t get rated multiple times, and THERE!
The greenies are always using the guise of fairness under the typical deceitful tactics.

May 6, 2013 2:12 am

“The greenies are always using the guise of fairness under the typical deceitful tactics.” Yes they do just that. Not that they see anything wrong with it. Remember it does not matter whether man made CO2 causes global warming or that it does not. It makes no difference to them. So to them lies are a good thing if it enables them to finalize their agenda. There is no difference between the average watermelon and the National Socialist German Workers’ Party.

May 6, 2013 2:17 am

Sorry Anthony, about my comment about watermelons being like the National Socialist German Workers’ Party. But I am a bit angry today about this latest Cook scam. :-((

May 6, 2013 11:00 am

What does a supposed literature survey have to do with physical reality?

upcountrywater
May 6, 2013 6:00 pm

Obviously a climate science history lesson… Really, who cares about historic data that ends in 2011…
Good job for not jumping on that squishy twistoflex of gobbledygook.
It’s all about the Sun, and the next few years are going to tweak the warmers..

Lars P.
May 7, 2013 11:51 am

I pretty much like Jo’s answer to Cook’s request:
http://joannenova.com.au/2013/05/dear-john-you-want-deniers-to-help-you-do-a-fallacious-survey-eh/
She puts down her arguments and concludes:
“It’s anti-science, illogical, run by the wrong man, and you [Cook] haven’t been honest. But if you can fix all that, then send me an email.”

The answer is direct, straight and honest, typically Jo.

May 9, 2013 1:31 pm

Roger Knights says: Here are my comments on a survey by James Powell, posted online in various sites last year, of 13,950 papers dealing with climate change.
Roger, here is a detailed rebuttal of this,
13,950 Meaningless Search Results
http://www.populartechnology.net/2013/04/13950-meaningless-search-results.html
1. The context of how the “search phrases” were used in all the results was never determined.
2. The results are padded by not using the search qualifier “anthropogenic”.
3. The 13,950 results cannot be claimed to be peer-reviewed as the Web of Science does not have a peer-reviewed only filter.
4. It is a strawman argument that skeptics deny or reject there has been a global temperature increase of a fraction of a degree since the end of the Little Ice Age.
