John Cook's new survey – lots of questions, no answers

This blog, along with (according to Cook) some 50 others split roughly 50/50 between skeptic and advocate sites, received this invitation:

Hi Anthony

As one of the more highly trafficked climate blogs on the web, I’m seeking your assistance in conducting a crowd-sourced online survey of peer-reviewed climate research. I have compiled a database of around 12,000 papers listed in the ‘Web Of Science’ between 1991 to 2011 matching the topic ‘global warming’ or ‘global climate change’. I am now inviting readers from a diverse range of climate blogs to peruse the abstracts of these climate papers with the purpose of estimating the level of consensus in the literature regarding the proposition that humans are causing global warming. If you’re interested in having your readers participate in this survey, please post the following link to the survey:

[redacted for the moment]

The survey involves rating 10 randomly selected abstracts and is expected to take 15 minutes. Participants may sign up to receive the final results of the survey (de-individuated so no individual’s data will be published). No other personal information is required (and email is optional). Participants may elect to discontinue the survey at any point and results are only recorded if the survey is completed. Participant ratings are confidential and all data will be de-individuated in the final results so no individual ratings will be published.

The analysis is being conducted by the University of Queensland in collaboration with contributing authors of the website Skeptical Science. The research project is headed by John Cook, research fellow in climate communication for the Global Change Institute at the University of Queensland.

This study adheres to the Guidelines of the ethical review process of The University of Queensland. Whilst you are free to discuss your participation in this study with project staff (contactable on +61 7 3365 3553 or j.cook3@uq.edu.au), if you would like to speak to an officer of the University not involved in the study, you may contact the Ethics Officer on +61 7 3365 3924.

If you have any questions about the survey or encounter any technical problems, you can contact me at j.cook3@uq.edu.au

Regards,

John Cook

University of Queensland/Skeptical Science

I asked Cook a series of questions about it because, given his behavior with Lewandowsky, I have serious doubts about the veracity of this survey. I asked to see the ethics approval application and the University’s approval, and he declined, saying that it would compromise the survey by revealing its internal workings. I also asked why each of the 50 emails sent out had a different tracking code on it, and he declined to explain that for the same reason. I asked to see the list of 12,000 papers, so that I could check whether the database truly represents the peer-reviewed landscape, and he declined that as well, though he said the list would be posted “very soon”.

I had concerns about the tracking codes on each email sent out, and I ran some tests on them. I also tested whether the survey could be run without tracking codes; it cannot. I asked him if he would simply provide a single code for all participants, so there could be no chance of binning data by skeptic/non-skeptic blog, or of preselecting the papers presented based on the code. I said this would truly ensure a double blind. He declined that request as well.

He stated that he expected (based on past experience) that no skeptic bloggers would post the survey anyway. So why send it, then?

Meanwhile, many other bloggers have shared their concerns with me. Lucia posted a long list of questions about Cook’s survey methodology here:

http://rankexploits.com/musings/2013/dear-john-i-have-questions/

It is a good list, and Lucia’s concerns are valid.

Brandon Shollenberger writes in comments at Lucia’s about some tests he ran:

========================================================

Brandon Shollenberger (Comment #112328)

May 3rd, 2013 at 12:48 am

For those following at home, the issue I wanted to talk to Lucia about is the non-randomness of this survey. I was curious when two people at SkS said they got an abstract which said (in part):

Agaves can benefit from the increases in temperature and atmospheric CO2 levels accompanying global climate change

I got the exact same abstract when I clicked on the link at SkS. I wondered if that meant there were only 10 abstracts being used at all. I then had a disturbing thought. The earlier Lewandowsky survey had different versions sent to different people for publishing. What if they had done that here? What if each site was sent a link to 10 different abstracts?

To test this, I contacted lucia to get the link she was sent. I then was able to find a site which had already posted the survey, and I got a different link from it. It turned out all of them resulted in me getting the same survey. I concluded everyone was simply getting the exact same 10 abstracts.

I was going to post a comment to that effect when lucia told me she did not get the Agave abstract I referred to. That made me take a closer look. What I found is by using proxies, I was able to get a number of different surveys. Moreover, some proxies got the same surveys as others. That suggests the randomization is not actual randomization, but instead, different samples are given based on one’s IP address.

Unfortunately, that’s not the end of the story. I’ve followed the links with my original IP address again, and I now get a different sample. However, each time I follow the link with the same IP address now, I get the same sample. That suggests I was right about IP addresses determining which sample you get, but there’s an additional factor. My first guess would be time, but if that’s the case, it’s a strange implementation of it. It would have to be something like an hourly (or even daily) randomization or some sort of caching, neither of which makes any sense to me.

Anyway, my head hurts from trying to figure out what screwy “randomization” John Cook is using. I know it’s nothing normal, and it certainly isn’t appropriate, but trying to figure out what sort of crazy thing he might have done is… difficult. I have no idea why he wouldn’t just use a standard approach like having time in seconds be a seed value for an RNG that picks 10 unique values each time someone requests a survey from the server.

=============================================================
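For reference, the “standard approach” Shollenberger describes at the end of his comment would look something like this. A minimal Python sketch, assuming only a list of abstract IDs; the function name is hypothetical and this is not Cook’s actual code:

```python
import random

def pick_survey_sample(abstract_ids, k=10):
    """Return k unique abstracts for one survey request.

    random.sample() draws without replacement using the module's
    global RNG state, so every request gets an independent random
    subset -- no dependence on the visitor's IP address or the date.
    """
    return random.sample(abstract_ids, k)

# Stand-in for the ~12,000 paper IDs in Cook's database:
abstracts = list(range(12000))

# Two consecutive "visitors" will almost certainly see different sets:
survey_a = pick_survey_sample(abstracts)
survey_b = pick_survey_sample(abstracts)
```

With roughly 12,000 abstracts, the chance of two visitors drawing the same ten is vanishingly small, which is exactly why identical samples across visitors were a red flag.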

So the sampling appears non-random after all, and has what I (and others) consider fatal sampling issues.

If you want to look at the survey, you can go to Cook’s website and take it there; until some answers are forthcoming, like Lucia, I won’t be posting the coded link for this blog.

See Cook’s survey link: Participate in a survey measuring consensus in climate research

markx
May 3, 2013 11:02 am

1. Meaningless survey. Unless it is a paper disputing the issue, almost every mainstream and peripheral paper even remotely connected to weather or climate now carries the obligatory phrase “and it (i.e., the topic of the paper) will be a lot worse in a hotter climate!!!”
2. Why are there so many papers on consensus connected to this climate issue? Never before have we seen the like of this.

HLx
May 3, 2013 11:05 am

“As one of the more highly trafficked climate blogs on the web” …
Lol!! The most trafficked!..

jc
May 3, 2013 11:06 am

These people are disreputable. That is established beyond doubt. You don’t continue to interact with such beings. That is simply foolish.
They are pariahs.

Reg Nelson
May 3, 2013 11:06 am

Steven Mosher says:
May 3, 2013 at 10:46 am
So if Spencer thought it was neutral and SkS readers thought it was neutral, but WUWT readers thought it falsified AGW, then a conclusion can be drawn. NOT ABOUT THE TRUTH OF THE PAPER. NOT ABOUT THE EXISTENCE OR IMPORTANCE OF CONSENSUS. BUT ABOUT WHAT YOUR REACTION IS TO THE ABSTRACT.
———————–
And what value is that? What does measuring some random person’s reaction to these abstracts show? And how can someone reading only an abstract form any kind of educated opinion without seeing the underlying work that claims to support it?
Cook has proven to be unethical and dishonest, and this is another propaganda ploy- THAT’S MY REACTION TO THIS.
And lastly, “ITS NOT ABOUT THE TRUTH OF THE PAPER.” Why isn’t it? Wouldn’t that actually prove something and further our understanding of the “settled science”?
We’re spending billions of dollars and lowering the standard of living of millions of people around the world based on what? Internet surveys?
Madness.

Dale
May 3, 2013 11:06 am

I’d rather have my tonsils extracted through my anus than participate in any survey conducted by Cook-Lewandowsky. You can bet the ranch on it that the end result of any survey conducted by these two charlatans will be that sceptics are crazy.
Once bitten twice shy.

JJ
May 3, 2013 11:11 am

Steven Mosher says:
Its not about the truth of the papers or the truth of the consensus.

Exactly. It is about telling lies about sceptics.

Mark
May 3, 2013 11:18 am

Based on Cook’s history, I think the conclusion this ‘study’ is hunting for is “Skeptics are incapable of perceiving reality accurately”, in short “Skeptics are nutters”. What I don’t understand is why he still even has it online. His past disingenuous actions have caused people to (quite rationally) examine his request carefully. His coding of the links has been highlighted on several prominent blogs, but not all blogs. There are ample reports of respondents taking the survey multiple times to observe how it works. Given those facts, there is no chance the study can generate useful data. The whole thing is now invalid except as a partial referendum on the experimenter’s reputation.
It has merely demonstrated the well-founded concern that surveys in which Cook is involved may not be legitimate. This is an evidence-based perception. In short, skeptics are skeptical, especially of those who’ve previously demonstrated strong bias on the same topic. Perhaps someone should point out the early failure of this study to the ethics board at the university. One of the jobs of such a board should be to avoid needlessly wasting volunteers’ time by continuing a study that can no longer reach any valid conclusion.

kadaka (KD Knoebel)
May 3, 2013 11:19 am

Why should we make our survey answers available to them, when their aim is to try and find something wrong with them?
(Ref)

May 3, 2013 11:26 am

Re: The Engineer 8:48 am

Endorsement (of AGW theory) – 1 pt
Implicit endorsement – 2 pts
Explicit endorsement – 3 pts
Neutral – 4 pts

What the heck is “Explicit” doing between “Neutral” and “Implicit”?
The same goes for the rejection side of the range.
There they go again… assuming linearity when there is no reason to believe it applies.

May 3, 2013 11:31 am

Climate ‘science’ is infested with sociology already. No point in adding to it.

dp
May 3, 2013 11:33 am

The bias was built in because of the preselection of searching for “global warming”. In case anyone’s forgotten, global warming is not at all the same as climate change, but is one possibility under the umbrella of climate change. So are a great many other weather phenomena. Like global cooling, for example, and just as likely.

Bob Koss
May 3, 2013 11:34 am

I was able to take the same survey twice in a row.
First time, total score 38. They reported a 3.8 average for ten questions answered.
I backed out to the SkS home page, reloaded the link, and filled out a second copy of the same questions, modifying a few values and answering “don’t know” for one of them. Total score 46. They reported that I answered 9 questions and that my average score was again 3.8. They evidently deducted the “don’t know” answer of 8 from my total score, but then still divided by 10 to calculate my average. They should have come up with 38 / 9 ≈ 4.2. I wonder whether this math error is only in the web display or whether it propagates into the data being saved.
There was no indication they were aware the same person was submitting a second copy of the same survey, such as telling me they were rejecting it or replacing my original. Very easy to spam. Here are the first and last papers in the two surveys I filled out, if anyone cares to compare:
1) International Year Of Planet Earth 9. Geology In The Urban Environment In Canada
10) On The Detection Of Trends In Long-term Correlated Records
Cross posted from Lucia’s place.
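The averaging bug Koss describes is easy to reproduce. A minimal Python sketch of the apparent (buggy) calculation versus the correct one, assuming “don’t know” is coded as the value 8 and excluded from scoring; this is my reading of his numbers, not Cook’s actual code:

```python
DONT_KNOW = 8  # apparent sentinel value for a "don't know" response

def buggy_average(ratings):
    """Reproduce the behavior Koss reports: the don't-know value is
    dropped from the total, but the divisor stays at 10 (all questions)."""
    answered = [r for r in ratings if r != DONT_KNOW]
    return sum(answered) / len(ratings)   # wrong divisor

def correct_average(ratings):
    """Divide only by the number of questions actually answered."""
    answered = [r for r in ratings if r != DONT_KNOW]
    return sum(answered) / len(answered)

# Hypothetical second survey matching Koss's totals:
# 9 rated questions summing to 38, plus one "don't know".
ratings = [4, 4, 4, 4, 4, 4, 4, 5, 5, DONT_KNOW]
```

Here `buggy_average(ratings)` gives 3.8 (38 / 10), matching what the site reported, while `correct_average(ratings)` gives 38 / 9 ≈ 4.2, matching Koss’s hand calculation.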

Ben of Houston
May 3, 2013 11:36 am

I can offer a guess about the weird semi-randomization. It’s a neat slip-up catch if there’s a variable-passing problem.
What you do is:
f(IP) + f(Date) = semi-random list
The date is included to prevent accidentally selecting by carrier or region. However, if a problem causes the list to be lost, the same list of papers can be reloaded. In short, it could very well be a legitimate feature, or even a programming cheat, instead of a bug or something insidious.
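A scheme like the one Ben guesses at would behave exactly as Shollenberger observed: the same visitor gets the same ten abstracts all day, then a different set once the date changes. A minimal Python sketch of that hypothetical mechanism (nobody outside the project has seen Cook’s code, so this is purely illustrative):

```python
import hashlib
import random

def sample_for(ip, date, abstract_ids, k=10):
    """Deterministic 'semi-random' selection: seed a private RNG from a
    hash of the visitor's IP address and the current date. Same IP and
    date always reproduce the same k abstracts; change either input and
    the seed, and hence the sample, changes."""
    seed = int(hashlib.sha256(f"{ip}|{date}".encode()).hexdigest(), 16)
    return random.Random(seed).sample(abstract_ids, k)

abstracts = list(range(12000))  # stand-in for the ~12,000 paper IDs

# Same IP, same date -> identical sample (what repeat visits showed):
a = sample_for("203.0.113.7", "2013-05-03", abstracts)
b = sample_for("203.0.113.7", "2013-05-03", abstracts)

# Same IP, next day -> a different sample, with overwhelming
# probability (Shollenberger's mysterious "additional factor"):
c = sample_for("203.0.113.7", "2013-05-04", abstracts)
```

Seeding from the IP alone would explain the proxy results; folding in the date would explain why the same IP later produced a new sample.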

May 3, 2013 11:37 am

In agreement with David L. Hagen 10:34 am
All the questions are good ones. But this survey fails the Nyquist Test.

From George E. Smith 27-Jan-2012 20:16 in “Decimals of Precision…”:
Second and far more important this, like all climate recording regimens, is a sampled data system.
So before you can even begin to do your statistication on the observations, you have to have VALID data, as determined by the Nyquist theorem. You have to take samples that are spaced no further apart than half the wavelength of the highest frequency component in the “signal”.

Cook’s survey purports to “randomly” sample from 12,000 abstracts, on a questionable cardinal scale of 7 rankings (of unknown repeatability, precision, and accuracy), from 50 blogs (mysteriously categorized into at least 2 camps), with sample batches of size 10, from an unknown number of human participants from a domain of hundreds of backgrounds. Come on! How many people must participate, across how many blogs, each on how many abstracts before Cook does not violate the Nyquist Theorem? This is a failure in the experimental design phase. GIGO.
To top it off, to believe Cook, you are to link to the survey, read, understand, and question 10 abstracts, give each one some careful consideration to fit into a 7 point rating system, all in about 15 minutes, or approximately one unbiased, carefully considered abstract evaluation every 70 seconds. Maybe a Google webbot might work at that speed. But thousands of human degreed blog visitors? No. Such a pace can be met only if you know the desired answer ahead of time.
“Beauty is only skin deep, but Ugly goes clear to the bone.”

Henry Galt
May 3, 2013 11:40 am

Let us just use all the peer reviewed papers that provide evidence that mankind’s addition to the carbon fluxes of this planet caused the/some warming since the Industrial Revolution.
Weed out all those papers that show that the warming periods during C20 caused discomfort to (insert inhabitants/plants of chosen field here) and those that ‘prove’ that it did, indeed warm during C20.
That new, shorter list we can deal with easily, as all respondents will be appraising the same abstracts 8)

May 3, 2013 11:42 am

Rasey
“To top it off, to believe Cook, you are to link to the survey, read, understand, and question 10 abstracts, give each one some careful consideration to fit into a 7 point rating system, all in about 15 minutes, or approximately one unbiased, carefully considered abstract evaluation every 70 seconds. Maybe a Google webbot might work at that speed. But thousands of human degreed blog visitors? No. Such a pace can be met only if you know the desired answer ahead of time.”
You can take as long as you want. Tested. They estimate about 15 minutes.
The abstracts I read took at best 10 seconds each to read and comprehend.
Your theory? Falsified. Now own it.

May 3, 2013 11:49 am

Anthony, you list “Skeptical Science” in the blog roll as “Unreliable.”
I see no reason to change that rating.
REPLY: I didn’t consider doing so, but everyone deserves the chance to change their status. I offered Cook that chance, he blew it. – Anthony

jim
May 3, 2013 11:55 am

OK, so put together a survey that has all the parameters you talk about and let everybody participate.

Peter Miller
May 3, 2013 11:55 am

‘Ethically challenged’ is one way of describing John Cook.
I think this is yet another attempt by the alarmist community to muddy the waters between AGW and CAGW.
The former exists, but is impossible to quantify due to the myriad feedbacks and forcings. Whatever its magnitude, it is not significant; at worst it is a mildly interesting phenomenon. As for CAGW, that is an outright hoax without any rationale whatsoever.
Cook is obviously planning some kind of Lewandowsky ‘research study’, but containing an order of magnitude greater amount of BS.

Skiphil
May 3, 2013 11:59 am

OOOOPS, in addition to the various technical and analytic problems cited above, the survey may have a huge problem with non-random assignment of the sets of 10 abstracts:
http://rankexploits.com/musings/2013/dear-john-i-have-questions/#comment-112328
This survey is sinking like the Titanic…..

May 3, 2013 12:05 pm

“And what value is that? What does measuring some random person’s reaction to these abstracts show? And how can someone reading only an abstract form any kind of educated opinion without seeing the underlying work that claims to support it?”
Let’s take an example. The abstract reports: “Ecosystem response to global warming will be complex and varied”
(this is a real example).
Your choices
1 Explicit Endorsement with Quantification: abstract explicitly states that humans are causing more than half of global warming.
2 Explicit Endorsement without Quantification: abstract explicitly states humans are causing global warming or refers to anthropogenic global warming/climate change as a given fact.
3 Implicit Endorsement: abstract implies humans are causing global warming. E.g., research assumes greenhouse gases cause warming without explicitly stating humans are the cause.
4 Neutral: abstract doesn’t address or mention issue of what’s causing global warming.
5 Implicit Rejection: abstract implies humans have had a minimal impact on global warming without saying so explicitly. E.g., proposing a natural mechanism is the main cause of global warming.
6 Explicit Rejection without Quantification: abstract explicitly minimizes or rejects that humans are causing global warming.
7 Explicit Rejection with Quantification: abstract explicitly states that humans are causing less than half of global warming.
Now, let’s suppose that the scientists who participated in this rate this paper a 4.
That is, the paper doesn’t say what causes GW, and it doesn’t say whether it is real or not;
it just says “the response will be varied”.
In trawling through the abstracts they offer up, I see a bunch of these neutral abstracts.
So what’s my point?
The point is this.
Suppose readers at SkS (AGWers) all rate this paper a 4. They agree with the scientists.
NOT ABOUT THE TRUTH OF THE STATEMENT, but they can read. The abstract is a 4.
The scientists who read it say it’s a 4, the readers of SkS say it’s a 4. It’s a 4, dammit.
Now have readers of WUWT read the abstract. Perhaps they all say the paper is a 3.
Perhaps when you guys read this, you see any mention of the words “global warming” as an implicit endorsement.
Then you have something to write a sociology paper about. Scientists say it was a 4,
SkS readers say it was a 4, but skeptics read this stuff differently and say it was a 3.
They read implicit endorsement where nobody else does.
Again, as with his last paper, Cook will “bracket the truth”. That is, the paper wouldn’t show there is a consensus, or that the consensus is true, or that the science is true; what it would aim at is showing how different groups read the science differently.
In short: you can’t read. You see confirmation of your position where there is none, and you see implicit meanings where nobody else does.
Finally, Cook may not be doing this. But if I had this data, I would do exactly this.
It’s a reading comprehension test. Hehe.

May 3, 2013 12:13 pm

Nullius in Verba says:
May 3, 2013 at 10:12 am
Just looking at the experimental design, my guess would be they intend to test the hypothesis that people on different sides of the debate assess evidence differently. The idea would be to show examples of abstracts that they interpret as AGW-supporting and that sceptics have classified as non-supportive. By presenting it as a survey to assess the consensus, it tempts sceptics into shading their judgements in that direction to try to bias the result towards reporting a lower degree of consensus.
##############
precisely.
Take 15 minutes and feed them garbage answers. A great big steaming pile of garbage. Don’t do multiple entries.

jorgekafkazar
May 3, 2013 12:17 pm

lucia liljegren (@lucialiljegren) says: “Owing to a glitch in John’s script, the system periodically displays thousands of titles. PaulM experienced it. I advised him to save source. I’m sure he had left the site by then. But then it happened to me. So, I saved the source. Zipped, the .html file is 7.5 MB. I suspect I have all the abstracts now.”
I suspect you have the results, now, too.

Björn from Sweden
May 3, 2013 12:22 pm

The survey is pointless. What is the point of guessing authors’ beliefs about global warming from papers indirectly premised on the notion that there will be global warming?
Totally fringe science, very far from the core problem of producing a verifiable climate theory.
Pointless and fruitless waste of time.