I, along with (according to Cook) 50 other blogs (with a supposed 50/50 skeptic-to-advocate split), have received this invitation:
As one of the more highly trafficked climate blogs on the web, I’m seeking your assistance in conducting a crowd-sourced online survey of peer-reviewed climate research. I have compiled a database of around 12,000 papers listed in the ‘Web Of Science’ between 1991 and 2011 matching the topic ‘global warming’ or ‘global climate change’. I am now inviting readers from a diverse range of climate blogs to peruse the abstracts of these climate papers with the purpose of estimating the level of consensus in the literature regarding the proposition that humans are causing global warming. If you’re interested in having your readers participate in this survey, please post the following link to the survey:
[redacted for the moment]
The survey involves rating 10 randomly selected abstracts and is expected to take 15 minutes. Participants may sign up to receive the final results of the survey (de-individuated so no individual’s data will be published). No other personal information is required (and email is optional). Participants may elect to discontinue the survey at any point and results are only recorded if the survey is completed. Participant ratings are confidential and all data will be de-individuated in the final results so no individual ratings will be published.
The analysis is being conducted by the University of Queensland in collaboration with contributing authors of the website Skeptical Science. The research project is headed by John Cook, research fellow in climate communication for the Global Change Institute at the University of Queensland.
This study adheres to the Guidelines of the ethical review process of The University of Queensland. Whilst you are free to discuss your participation in this study with project staff (contactable on +61 7 3365 3553 or email@example.com), if you would like to speak to an officer of the University not involved in the study, you may contact the Ethics Officer on +61 7 3365 3924.
If you have any questions about the survey or encounter any technical problems, you can contact me at firstname.lastname@example.org
University of Queensland/Skeptical Science
I asked Cook a series of questions about it, because given his behavior with Lewandowsky, I have serious doubts about the veracity of this survey. I asked to see the ethics approval application and the approval from the University, and he declined, saying that it would compromise the survey by revealing its internal workings. I also asked why each of the 50 emails sent out had a different tracking code on it, and he declined to explain that for the same reason. I asked to see the list of 12,000 papers, so that I could check whether the database truly represented the peer-reviewed landscape, and he declined that as well, but said the list would be posted “very soon”.
I had concerns about the tracking codes attached to each email sent out, and I ran some tests on them. I also tested whether the survey could be run without a tracking code; it cannot. I asked him to simply provide a single code for all participants, so that there would be no chance of binning data by skeptic/non-skeptic blog, or of preselecting the papers presented based on the code; I said this would truly ensure a double blind. He declined that request as well.
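To illustrate why a per-blog tracking code matters, here is a minimal sketch in Python. It is purely hypothetical, not a claim about how Cook’s server actually works; the code parameter name and the blog mapping are my own inventions. The point is only that a unique code embedded in each survey link is all a server needs to bin responses by referring blog, or even to vary what is served:

```python
# Hypothetical sketch: how a per-blog tracking code embedded in a
# survey URL could be used server-side. Not Cook's actual code.
from urllib.parse import urlparse, parse_qs

# Assumed mapping from tracking code to blog (illustrative only)
CODE_TO_BLOG = {"a1b2c3": "skeptic_blog", "d4e5f6": "advocate_blog"}

def handle_request(url: str, responses_by_blog: dict) -> str:
    """Bin an incoming survey request by the tracking code in its URL."""
    params = parse_qs(urlparse(url).query)
    code = params.get("c", ["unknown"])[0]          # hypothetical param name
    blog = CODE_TO_BLOG.get(code, "unknown")
    responses_by_blog.setdefault(blog, []).append(code)
    return blog

bins = {}
print(handle_request("http://example.com/survey?c=a1b2c3", bins))  # skeptic_blog
```

A single shared code, as I requested, would make this kind of binning impossible by construction.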
He stated that he had an expectation (based on past experience) that no skeptic bloggers would post the survey anyway. So why send it then?
Meanwhile, many other bloggers have shared their concerns with me. Lucia posted a long list of questions about Cook’s survey methodology here:
It is a good list, and Lucia’s concerns are valid.
Brandon Shollenberger writes in comments at Lucia’s about some tests he did:
Brandon Shollenberger (Comment #112328)
May 3rd, 2013 at 12:48 am
For those following at home, the issue I wanted to talk to Lucia about is the non-randomness of this survey. I was curious when two people at SkS said they got an abstract which said (in part):
Agaves can benefit from the increases in temperature and atmospheric CO2 levels accompanying global climate change
I got the exact same abstract when I clicked on the link at SkS. I wondered if that meant there were only 10 abstracts being used at all. I then had a disturbing thought. The earlier Lewandowsky survey had different versions sent to different people for publishing. What if they had done that here? What if each site was sent a link to 10 different abstracts?
To test this, I contacted lucia to get the link she was sent. I then was able to find a site which had already posted the survey, and I got a different link from it. It turned out all of them resulted in me getting the same survey. I concluded everyone was simply getting the exact same 10 abstracts.
I was going to post a comment to that effect when lucia told me she did not get the Agave abstract I referred to. That made me take a closer look. What I found is by using proxies, I was able to get a number of different surveys. Moreover, some proxies got the same surveys as others. That suggests the randomization is not actual randomization, but instead, different samples are given based on one’s IP address.
Unfortunately, that’s not the end of the story. I’ve followed the links with my original IP address again, and I now get a different sample. However, each time I follow the link with the same IP address now, I get the same sample. That suggests I was right about IP addresses determining which sample you get, but there’s an additional factor. My first guess would be time, but if that’s the case, it’s a strange implementation of it. It would have to be something like an hourly (or even daily) randomization or some sort of caching, neither of which makes any sense to me.
Anyway, my head hurts from trying to figure out what screwy “randomization” John Cook is using. I know it’s nothing normal, and it certainly isn’t appropriate, but trying to figure out what sort of crazy thing he might have done is… difficult. I have no idea why he wouldn’t just use a standard approach like having time in seconds be a seed value for an RNG that picks 10 unique values each time someone requests a survey from the server.
So it appears non-random after all, and the survey has what I (and others) consider fatal sampling issues.
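For what it’s worth, the behavior Brandon describes, the same IP getting the same sample repeatedly, but a different sample after some time has passed, is consistent with a deterministic assignment keyed on IP address plus some coarse time bucket, rather than true per-request randomization. Here is a minimal Python sketch, entirely my own illustration and not Cook’s code, contrasting that suspected scheme with the standard approach Brandon suggests:

```python
# Illustrative sketch only -- not Cook's actual code.
import hashlib
import random

ABSTRACTS = list(range(12000))  # stand-in for the ~12,000 paper IDs

def suspected_assignment(ip: str, hour_bucket: str, k: int = 10) -> list:
    """Deterministic sample keyed on IP + a coarse time bucket.
    Same IP in the same hour -> same 10 abstracts, matching what
    Brandon observed with proxies and repeat visits."""
    seed = int(hashlib.sha256(f"{ip}|{hour_bucket}".encode()).hexdigest(), 16)
    return random.Random(seed).sample(ABSTRACTS, k)

def standard_assignment(k: int = 10) -> list:
    """The conventional approach: a fresh random sample of 10 unique
    abstracts per request, independent of who is asking."""
    return random.sample(ABSTRACTS, k)

# Same IP + same hour bucket reproduces the sample; a new hour changes it.
print(suspected_assignment("203.0.113.7", "2013-05-03T00") ==
      suspected_assignment("203.0.113.7", "2013-05-03T00"))  # True
print(suspected_assignment("203.0.113.7", "2013-05-03T00") ==
      suspected_assignment("203.0.113.7", "2013-05-03T01"))  # False (almost surely)
```

If something like the first function is in play, the samples people see are not independent random draws, which is exactly the sampling problem Brandon’s tests point to.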
If you want to look at the survey, you can go to Cook’s website and take it there; like Lucia, I won’t be posting the coded link for this blog until some answers are forthcoming.
See Cook’s survey link: Participate in a survey measuring consensus in climate research