I and (according to Cook) 50 other blogs (with a supposed 50/50 skeptic to advocate split) have received this invitation:
Hi Anthony
As one of the more highly trafficked climate blogs on the web, I’m seeking your assistance in conducting a crowd-sourced online survey of peer-reviewed climate research. I have compiled a database of around 12,000 papers listed in the ‘Web Of Science’ from 1991 to 2011 matching the topic ‘global warming’ or ‘global climate change’. I am now inviting readers from a diverse range of climate blogs to peruse the abstracts of these climate papers with the purpose of estimating the level of consensus in the literature regarding the proposition that humans are causing global warming. If you’re interested in having your readers participate in this survey, please post the following link to the survey:
[redacted for the moment]
The survey involves rating 10 randomly selected abstracts and is expected to take 15 minutes. Participants may sign up to receive the final results of the survey (de-individuated so no individual’s data will be published). No other personal information is required (and email is optional). Participants may elect to discontinue the survey at any point and results are only recorded if the survey is completed. Participant ratings are confidential and all data will be de-individuated in the final results so no individual ratings will be published.
The analysis is being conducted by the University of Queensland in collaboration with contributing authors of the website Skeptical Science. The research project is headed by John Cook, research fellow in climate communication for the Global Change Institute at the University of Queensland.
This study adheres to the Guidelines of the ethical review process of The University of Queensland. Whilst you are free to discuss your participation in this study with project staff (contactable on +61 7 3365 3553 or j.cook3@uq.edu.au), if you would like to speak to an officer of the University not involved in the study, you may contact the Ethics Officer on +61 7 3365 3924.
If you have any questions about the survey or encounter any technical problems, you can contact me at j.cook3@uq.edu.au
Regards,
John Cook
University of Queensland/Skeptical Science
I asked Cook a series of questions about it, because given his behavior with Lewandowsky, I have serious doubts about the veracity of this survey. I asked to see the ethics approval application and the University’s approval, and he declined, saying it would compromise the survey by revealing its internal workings. I also asked why each of the 50 emails sent out had a different tracking code on it, and he declined to explain that for the same reason. I asked to see the list of 12,000 papers, so that I could check whether the database was a true representation of the peer-reviewed landscape, and he declined that too, but said the list would be posted “very soon”.
I had concerns about the tracking codes on each email sent out, and I ran some tests on them. I also tested whether the survey could be run without tracking codes; it cannot. I asked him if he would simply provide a single code for all participants, so that there could be no chance of binning data by skeptic/non-skeptic blog, or of preselecting the papers presented based on the code. I said this would truly ensure a double blind. He declined that request as well.
He stated that he had an expectation (based on past experience) that no skeptic bloggers would post the survey anyway. So why send it then?
Meanwhile many other bloggers shared their concerns with me. Lucia posted a large list of questions about Cook’s survey methodology here:
http://rankexploits.com/musings/2013/dear-john-i-have-questions/
It is a good list, and Lucia’s concerns are valid.
Brandon Shollenberger writes at Lucia’s in comments about some tests he did:
========================================================
Brandon Shollenberger (Comment #112328)
May 3rd, 2013 at 12:48 am
For those following at home, the issue I wanted to talk to Lucia about is the non-randomness of this survey. I was curious when two people at SkS said they got an abstract which said (in part):
Agaves can benefit from the increases in temperature and atmospheric CO2 levels accompanying global climate change
I got the exact same abstract when I clicked on the link at SkS. I wondered if that meant there were only 10 abstracts being used at all. I then had a disturbing thought. The earlier Lewandowsky survey had different versions sent to different people for publishing. What if they had done that here? What if each site was sent a link to 10 different abstracts?
To test this, I contacted lucia to get the link she was sent. I then was able to find a site which had already posted the survey, and I got a different link from it. It turned out all of them resulted in me getting the same survey. I concluded everyone was simply getting the exact same 10 abstracts.
I was going to post a comment to that effect when lucia told me she did not get the Agave abstract I referred to. That made me take a closer look. What I found is by using proxies, I was able to get a number of different surveys. Moreover, some proxies got the same surveys as others. That suggests the randomization is not actual randomization, but instead, different samples are given based on one’s IP address.
Unfortunately, that’s not the end of the story. I’ve followed the links with my original IP address again, and I now get a different sample. However, each time I follow the link with the same IP address now, I get the same sample. That suggests I was right about IP addresses determining which sample you get, but there’s an additional factor. My first guess would be time, but if that’s the case, it’s a strange implementation of it. It would have to be something like an hourly (or even daily) randomization or some sort of caching, neither of which makes any sense to me.
Anyway, my head hurts from trying to figure out what screwy “randomization” John Cook is using. I know it’s nothing normal, and it certainly isn’t appropriate, but trying to figure out what sort of crazy thing he might have done is… difficult. I have no idea why he wouldn’t just use a standard approach like having time in seconds be a seed value for an RNG that picks 10 unique values each time someone requests a survey from the server.
=============================================================
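None of the survey’s code is public, so the following is purely an illustration of the two schemes Brandon’s tests distinguish: the standard per-request random draw he says he would expect, versus a hypothetical scheme that seeds the RNG from the visitor’s IP address plus a time bucket, which would reproduce exactly what he observed (same IP gets the same 10 abstracts within a window, different IPs or a later hour get different ones). All names, IDs, and the seeding formula here are invented for the sketch.

```python
import hashlib
import random

# Placeholder IDs standing in for the ~12,000 abstracts in the database.
ABSTRACT_IDS = list(range(12000))

def sample_per_request(n=10):
    """Standard approach: a fresh, independent draw of n unique
    abstracts on every request."""
    return random.sample(ABSTRACT_IDS, n)

def sample_by_ip_and_hour(ip, hour_bucket, n=10):
    """Hypothetical scheme consistent with the observations above:
    the RNG is seeded deterministically from IP + time bucket, so the
    same visitor in the same window always gets the same n abstracts."""
    seed = hashlib.sha256(f"{ip}|{hour_bucket}".encode()).hexdigest()
    rng = random.Random(seed)
    return rng.sample(ABSTRACT_IDS, n)

# Same IP and same hour bucket yield an identical "random" sample:
a = sample_by_ip_and_hour("203.0.113.7", "2013-05-03T00")
b = sample_by_ip_and_hour("203.0.113.7", "2013-05-03T00")
assert a == b
```

Under the deterministic scheme, switching proxies (a new IP) or waiting for the next time bucket changes the sample, which matches both the proxy results and the “it changed later on the same IP” result described above.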
So it appears non-random after all, and has what I (and others) consider fatal sampling issues.
If you want to look at the survey, you can go to Cook’s website and take it there, because until there are some answers forthcoming, like Lucia, I won’t be posting the coded link for this blog.
See Cook’s survey link: Participate in a survey measuring consensus in climate research
The consensus of opinion has been wrong on many occasions before and therefore is irrelevant. Only empirical evidence matters.
The results of this survey would probably tell you more about who has been funding research than it does about the consensus of opinion anyway.
Having read the replies above I think we have reached a consensus among WUWT skeptics that CAGW advocates are untrustworthy. To which I add my vote.
12,000 climate science papers. Findings of the survey will be:
X% support AGW.
Y% refute AGW
Z% are neutral
… and 100% of the sceptics are nutters …
… and we can tell who they are because they have failed to recognize that 97% of the abstracts support AGW and only 1% refute AGW while 2% are neutral.
… and we now have their IP addresses.
Only consensus, or no-consensus, type responses are available. None of the available responses include the option that the authors ASSUMED global warming was a given, nor how the circular logic of that assumption allows their papers to be written.
After looking at the first ten abstracts, I believe that this study is designed to demonstrate that a minority segment of the population has conspiratorial ideations concerning belief in the pro-AGW slant of a paper where the author claims no such thing exists. Since they are using a Likert scale, I suspect they will attempt to demonstrate that “balanced responses” correlate well with author self-assessments and “unbalanced responses” do not correlate well and tend to skew toward paranoid delusions of pro-AGW bias when supposedly frank self-assessments tell us otherwise.
In other words, the hypothesis of the study assumes skeptics are nutburgers and the study has been constructed so as to prove this point.
Even if his 12,000 papers are an accurate reflection of the peer-reviewed literature, is the peer-reviewed literature an accurate reflection of the views of scientists on this subject?
We’ve already discussed the problems skeptics have with getting papers into and out of peer review.
There’s also the problem that since govt money flows almost exclusively to the warmistas, it’s much easier for them to have the time and resources to write and publish papers, which could skew the population of these papers.
Just looking at the experimental design, my guess would be they intend to test the hypothesis that people on different sides of the debate assess evidence differently. The idea would be to show examples of abstracts that they interpret as AGW-supporting and that sceptics have classified as non-supportive. By presenting it as a survey to assess the consensus, it tempts sceptics into shading their judgements in that direction to try to bias the result towards reporting a lower degree of consensus.
It’s an interesting question, and if you got someone a little more neutral to conduct it, (e.g. someone like Dan Kahan, who researches this stuff) it would be a worthwhile enquiry.
I think there are ethical issues in deceiving experimental subjects about their participation – while I can certainly understand that in this area there is a risk of attempts to bias the outcome if you tell people too much, you can’t experiment on people without at least letting them know they’re the subjects of an experiment.
Of course, if somebody was trying to crowdsource an assessment of consensus, you need multiple assessments of the same abstract to avoid exactly that sort of bias. If everyone says the same you can accept it, if people are split, you need to look more closely. It’s hard to say.
Oh of course I’d be glad to help John Cook in whatever way I possibly could. I mean, it’s not like he runs a blog that censors and rewrites comments. Not like he’s ever been involved with running any prior surveys that were questionable or nefarious.
Hmm. But on the other hand, I’ve got this interesting box of thin spaghetti in my pantry and I’m thinking it’d be much more productive and rewarding to spend those fifteen minutes counting how many pieces of pasta are in the box. So many engagements, so little time…
I mean, take a John Cook survey? Really???
In light of the egregious ethical and professional failings displayed by John Cook and his survey mentor Stephan Lewandowsky, coupled with the utter absence of any good-faith effort to address the many issues raised to date, the only appropriate response to this “invitation” would be expressions of contempt.
I would not lend the slightest support or approval to anything those clowns are doing. If I come across a survey link, I may be tempted to spoof it as an expression of disdain, but otherwise I plan to ignore such survey frauds.
How about a survey ranking the predictive value of these 12,000 papers?
That’s the pertinent question. If the papers are rubbish, who cares if people believe them or not.
This is politics; has nothing to do with science.
Count me out. Crowd sourcing can yield valuable results in some areas (RatherGate comes to mind), but forming consensus or measuring the degree of consensus are not among them.
This “study” will be plagued by all the problems with which professional pollsters (e.g. Gallup) have learned to cope. The proposal is a kind of polling, an area where Cook is a proven amateur.
Find out what John Cook “really” wants. Attention.
What do narcissists “really” want from others? Attention.
What do whining babies want? Attention.
What do barking dogs want? Attention.
He wants attention because he is lonely. The enviros are all the same, like any small child, or barking dog.
Why do you care what John Cook wants?? Why does anyone even notice John Cook?
I faked the moon landings!
Re: Survey of Peer-Reviewed Scientific Research
(Emphasis added)
This survey is poorly designed, using ambiguous terms:
“Global warming” could be natural global warming, minor anthropogenic global warming, or major anthropogenic global warming.
Furthermore, it does not provide for “unknown anthropogenic global warming”, where one believes there is a scientific basis for more anthropogenic CO2 causing warming, but that current scientific evidence cannot distinguish between natural, minor anthropogenic, or major anthropogenic warming.
This survey is poorly designed, using equivocation:
“Global climate change” is used by equivocation to imply major anthropogenic global warming. Yet it can equally mean the global cooling scare of the 1970s or the descent into the Little Ice Age.
Furthermore, “climate change” cannot distinguish between:
those who expect there will be major anthropogenic global warming;
luke-warmists who expect minor or little global warming;
those who expect serious global cooling by mid-century followed by a return to warming; and
those who see the beginning of a descent into the next glaciation in about 1,500 years.
Professionally, I review scientific and engineering papers.
However I refuse to participate in this survey for its very poor definitions, equivocation, and the prior history of unethical anti-scientific behavior by those conducting it.
Engaging these people has become too draining.
That’s the point of Alinsky tactics. Keep up the torrent of lies and wear the enemy down.
If I were a climate scientist with an agenda, I would put a boatload of pro-warming papers on a website and then invite skeptics to evaluate them, after which I would laugh at the results: skeptics rejecting such a large body of peer-reviewed scientific research proving conclusively that global warming was real. Otherwise, why seek a consensus on a consensus?
Mark Bofill says: “…on the other hand, I’ve got this interesting box of thin spaghetti in my pantry and I’m thinking it’d be much more productive and rewarding to spend those fifteen minutes counting how many pieces of pasta are in the box.”
I plan to watch ’em knock down the old Endicott Building, in lieu.
“Reg Nelson says:
May 3, 2013 at 10:26 am
How about a survey ranking the predictive value of these 12,000 papers?
That’s the pertinent question. If the papers are rubbish, who cares if people believe them or not.”
########################################################
If the author thinks it supports AGW and you think it doesn’t, then there is an inference that can be drawn regardless of the truth of the paper.
Like so:
One of the papers selected is a paper by Spencer. The abstract says “Satellites can be used to measure temperature”; the paper says nothing about AGW.
So, one wants to compare what Spencer thought about his paper with what readers of SkS think versus what readers of WUWT think.
IT’S NOT ABOUT THE TRUTH OF THE PAPER.
So if Spencer thought it was neutral and SkS readers thought it was neutral, but WUWT readers thought it falsified AGW, then a conclusion can be drawn. NOT ABOUT THE TRUTH OF THE PAPER. NOT ABOUT THE EXISTENCE OR IMPORTANCE OF CONSENSUS. BUT ABOUT WHAT YOUR REACTION IS TO THE ABSTRACT.
It’s not about the truth of the paper.
It’s not about the strength of the consensus.
It’s about the public’s perception of consensus versus the scientists’ perception of consensus.
So the TRUTH behind that consensus is NOT AT ISSUE. It’s your attitude that would be studied.
If the results come out that the public thinks there is more consensus than scientists think, then consensus has been ‘oversold’. If skeptics think there is less consensus than scientists themselves think, then you get to explain that.
You will see this happen a few times, where a paper is published and AGW folks think the paper supports AGW and skeptics think it doesn’t.
That fact would motivate me as a researcher to look at how people differ in the conclusions they draw from reading an abstract.
Note, after you take the survey they tell you what the authors thought of their own paper.
Now, I have zero knowledge that this is what Cook is up to. This is fun speculation. However, if one had this data, that is, if one had:
A) what an author thinks he said
B) what an AGWer thought the author said
C) what a skeptic thought the author said
then one could do what I describe above. To be clear, I’ve got no knowledge that this is what they are doing, but if I had that data, that is what I would do.
It’s not about the truth of the papers or the truth of the consensus.
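The A/B/C comparison described above is, at bottom, a simple calculation: for each paper, measure the gap between a reader group’s rating and the author’s own rating, and average over papers. The sketch below uses entirely invented Likert-style numbers (1 = strongly rejects AGW, 4 = neutral, 7 = strongly endorses); none of them come from any real survey data.

```python
# Invented author self-ratings on a 1-7 Likert scale (4 = neutral).
author_rating = {"paper_1": 4, "paper_2": 6, "paper_3": 4}

# Invented ratings of the same abstracts by two hypothetical reader groups.
reader_ratings = {
    "sks":  {"paper_1": 4, "paper_2": 6, "paper_3": 5},
    "wuwt": {"paper_1": 2, "paper_2": 5, "paper_3": 4},
}

def mean_deviation(group):
    """Average signed gap between a group's ratings and the authors'
    self-ratings; a negative value means the group reads abstracts as
    less AGW-endorsing than the authors themselves do."""
    diffs = [reader_ratings[group][p] - author_rating[p] for p in author_rating]
    return sum(diffs) / len(diffs)

for group in reader_ratings:
    print(group, mean_deviation(group))
```

With these made-up numbers, the “sks” group skews slightly above the authors and the “wuwt” group skews below them; the point is only that the comparison says something about the raters’ reading of the abstracts, not about the truth of the papers.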
There are so many things wrong with this exercise, that I am reminded of the old line:
“I feel like a mosquito in a nudist camp. I know what I want to do, but don’t know where to begin.”
Leaving aside all of the sampling errors, which no doubt more erudite readers than me will pick up on, let’s go back to taws – an Australian expression which refers to playing marbles. It means going back to the baseline.
Thanks to WUWT, I have read a lot more abstracts than I would have otherwise. Still, for the most part they are often abstruse and sometimes indecipherable, not to mention that Anthony and others point out they might not be quite perfect.
Let’s get this straight: there is the original data, then there is the paper, then there is the abstract – and then they ask who-knows-who what they think about it.
Holy cow.
Isn’t this just more desperation? They cannot produce a genuine survey to support the (false) 97% consensus claims, and the AGW case is getting weaker by the day, so this is just trying to draw people into something that they can further manipulate and present as ‘science’.
“… It’s too big to be a space station.”
…
“Turn the ship around!”
I had ten abstracts. Four were studies of the climate, and two of those four were not pro-AGW. The rest were either about CO2 regulations or grant-seekers using AGW as a premise to write something dramatic in their chosen field of study. Scaled up, that would make 20% of papers bona fide pro-AGW.
Life is quite short enough. What time I have left is precious.
Owing to a glitch in John’s script, the system periodically displays thousands of titles. PaulM experienced it; I advised him to save the source, but I’m sure he had left the site by then. Then it happened to me, so I saved the source. Zipped, the .html file is 7.5 MB. I suspect I now have all the abstracts.
I think we can now contemplate “research”!