Guest post by Alec Rawls
After finalizing a long post on John Cook’s crowd-sourced consensus-rating survey (to be titled “I take Cook’s survey so you don’t have to”), I submitted my completed survey to Cook’s website and received an automated response that included a key new bit of information, suggesting what likely shenanigan Cook has planned.
I am not going to rewrite the post because it describes why I gave the ratings I did to each abstract in the random survey that Cook’s website compiled for me. The likely shenanigan has to do with how the rating rules are applied so I want it to be clear that what I wrote on that subject was innocent of any awareness of how Cook might be biasing the survey. I am just adding this brief introduction.
The new information (new to me) is that Cook seems to be claiming to have in his back pocket a correct answer to what each of the ratings should be. From the automated response:
Of the 10 papers that you rated, your average rating was 3.1 (to put that number into context, 1 represents endorsement of AGW, 7 represents rejection of AGW and 4 represents no position). The average rating of the 10 papers by the authors of the papers was 2.6.
It seems impossible that Cook could actually have gotten large numbers of authors to apply his rating scale to their papers. Maybe this is why he drastically reduced the number of included papers from the 12,000 mentioned in the survey to only those with abstracts shorter than 1000 characters. Maybe the full reduction is to papers that not only have short abstracts but were also self-rated by authors.
Supposing that Cook really does have author-ratings for all the papers used for the survey, there is a major slip between the cup and the lip. The authors are described as rating the papers, while surveyors are asked to rate only the abstracts. This is critical because according to Cook’s rating rules the ratings are supposed to be based on what is or is not specifically mentioned. Obviously full papers discuss a lot more things than abstracts, especially unusually short abstracts. Thus, if everyone is applying the rules correctly, surveyors’ ratings should be systematically higher (assessing less conformity with consensus assumptions) than authors’ ratings.
Suppose (as is likely) that survey participants who are referred by skeptic websites rate the abstracts accurately according to the instructions, while those who are referred by credulous websites misapply the instructions so as to exaggerate the degree of consensus. This misapplication of the rules will bring the consensoids’ ratings closer to the authors’ ratings than the skeptics’ accurate ratings will be, making the consensoid surveyors look less biased than the skeptic surveyors when they are in fact more biased. Mission accomplished.
My original post is after the jump.
I take Cook’s survey so you don’t have to
John Cook, creator of the pathologically credulous Skeptical Science website, is asking interested parties of all persuasions to participate in a crowd-sourced assessment of the degree of consensus in the peer-reviewed literature on climate change. What percentage of published papers affirm the “consensus” view that global warming has mostly been caused by human activity? How many deny it, and how many abstain?
This is a question that skeptics and believers are both interested in but for different reasons. Believers use claims of consensus to support their arguments from authority: who are you going to believe, all the scientists, or a few doubters who don’t have sufficient training to properly grasp the issues?
Skeptics see the “consensus” as manufactured by 20-plus years of politically allocated funding and ideological bullying. The science itself is extremely uncertain and rife with contra-indications, turning any high degree of conformity in the peer-reviewed literature into a measure of intellectual corruption. (WUWT ran a post on this the other week.)
Given Cook’s history of shenanigans there is concern that some kind of subterfuge is planned, especially since he asked different bloggers to post survey links with different tracking tags without mentioning this in his invitation letter. That seems to have put a “hold” on skeptic participation (WUWT, for one, is not posting its unique survey link unless Cook can give a satisfactory explanation).
I suggest a simple condition for participation: that Cook promise to publish the full data, broken down by abstract and binned by the skeptic vs. believer classifications of the referrers. Cook’s rating system is reasonably unambiguous so where the credulous and the skeptic sides differ in rating a particular abstract it will be easy to see who is being honest.
I don’t think that even the pathologically dishonest “consensus” side will find much to be dishonest about here, but the comparison obviously holds no threat for skeptics. Also, the number of credulous and skeptic ratings for each abstract will allow the randomness of the selection of the abstracts to be checked. The data could be further broken down by referring blog, but Cook did not advise invited blog authors that this would be done, so that might be inappropriate at this point, and it is not necessary for checking randomness.
Hopefully Cook will agree to these terms, as his rating system does have some merit.
Cook does not try to inflate “consensus” numbers by conflating the credulous and the skeptic positions
That was the problem with the bogus Doran and Zimmerman survey that asked respondents whether they think CO2 emissions have a “significant” effect on climate. “Significance” is a bottom-threshold criterion of undefined strength. To the extent that it has any specific meaning it only refers to whether an effect is statistically discernable, at however small a level. Few skeptics (being naturally modest in their claims) would say that CO2 effects cannot even be called “significant,” making Doran’s finding of 82% consensus meaningless. It’s a consensus on a non-issue that skeptics and believers don’t disagree about.
The question is whether CO2 emissions are dangerous, and here Cook’s rating criterion is sound. For danger to even be a possibility CO2 effects have to be larger than natural effects (which are almost certainly headed in the cooling direction now that the sun has dropped into a quiescent phase), and this is what Cook’s rating scale focuses on: to what extent does a given abstract conform to the view that “humans are causing more than half of global warming.”
One can still quibble. By using the present tense (“humans are causing”) Cook presumably means to refer only to the most recent global warming, post-1975 say. The planet cooled for the 30 years before that and pretty much no-one believes that human CO2 emissions had much impact before WWII, but Cook doesn’t let on how short a bout of warming is being used to proclaim a whole new climate regime.
Use of the present tense also presents global warming as an ongoing phenomenon, even though after 15 years of no statistically significant warming no one can avoid the question of whether warming actually IS ongoing or whether it has stopped. (From Reuters three weeks ago: “Climate scientists struggle to explain warming slowdown.”)
But these biases are actually appropriate. The purpose of the survey is to gauge conformity with the “consensus” position and the consensus, as articulated by the IPCC, is that dangerous global warming is not just an ongoing problem but a rapidly growing problem. Here Cook’s rating criterion—is warming mostly caused by people?—is commendably straightforward, getting at the idea that human effects have outstripped natural effects and we are now on some uncharted run-away course.
The last 15 years of no discernable warming pretty much proves that human effects are not stronger than natural effects, at least not yet, so it has to be swept under the rug. That’s the “consensus” way. Might as well go ahead and measure it.
My survey experience
I wanted to see if Cook’s survey exercise was at all revealing so I gave it a whirl, following the link that he asked me to post on my Error Theory blog.
A copy of the survey I took is posted here. I disabled the “submit” button, just in case it might actually work and mess up Cook’s randomization, and I added the ErrorTheory-tagged survey link to the top of the page, in case anyone really wants to send Cook some skeptic-tagged responses. I don’t see any harm in it. If John tries to misuse the results that just creates pressure for him to release the data (broken down by abstract) which could be amusing.
The rating options (more fully explained on the survey page) are:
1. Explicitly endorses and quantifies AGW as 50+%
2. Explicitly endorses but does not quantify or minimise
3. Implicitly endorses AGW without minimising it
4. Neutral
5. Implicitly minimizes/rejects AGW
6. Explicitly minimizes/rejects AGW but does not quantify
7. Explicitly minimizes/rejects AGW as less than 50%
8. Don’t know
As I detail below, all ten of the abstracts that were selected for me to rate fit clearly within the IPCC paradigm that humans are creating dangerous global warming (a little higher than I would expect on average, but it could be because the abstracts are from an author-rated subset of the literature, which would not be representative and might well tilt to the consensus side).
All but one (maybe two) of the papers proceed from the assumption that big warming is coming, and since nobody on either side of the debate is predicting substantial natural warming, the authors must be implicitly assuming that the predicted warming will be human-caused. But this conformism doesn’t show up very well in Cook’s highly specific rating scheme.
Five of the articles take the projections of big human-caused warming so much for granted that they don’t bother to mention the causes of warming at all, yielding a Cook-rating of “4” for “neutral” (“doesn’t address or mention [the] issue of what’s causing global warming”). Further, none of the ten explicitly state that people are causing most of the warming (which would rate a “1”), making for a modest average Cook-consensus-rating of 3.1 (less than one point from “neutral”), despite the extreme degree of conformity exhibited by the papers.
So Cook’s rating scheme does not provide a very good measure of consensus conformity, but it is revealing in a different way. The exercise of classifying the abstracts gives a pretty good picture of the various spheres of meaningless toil that $100 billion in climate-science funding has created.
How will an ever-increasing rate of global warming harm x, y, and z?
We are all familiar with this category of conformist toiling and it is well represented in my sample survey. (The links that follow are to the abstracts of the named papers).
Why would anyone write a paper on the “Possible Impact Of Global Warming On Cabbage Root Fly (delia-radicum) Activity In The UK,” assessing an obscure possible consequence of several degrees of global warming, if they didn’t believe the IPCC’s projections that human CO2 emissions will indeed cause such warming? Okay, there is another likely reason: because politically funded climate science won’t support anything to do with the natural causes of climate change, or the very real damage that even mild global cooling would inflict. But regardless of whether the authors of the paper are coercees or coercers, they are still part of the “consensus.”
In the same category are: “Predicting Extinctions As A Result Of Climate Change,” (based, of course, on the “consensus” predictions of climate change); and “Effect Of The Solid Content On Biogas Production From Jatropha Curcas Seed Cake.” (Who would focus on horrendously inefficient biofuels if they were not convinced that fossil fuels were killing the planet?)
Another article, “Changes In Tropical Cyclone Number, Duration, And Intensity In A Warming Environment,” finds a lead lining in the decreasing levels of hurricane energy. The number of category 4 and 5 storms has gone up, they say, so we should still be alarmed. (Just don’t tell ’em what’s happened to global major hurricane frequency since their paper came out in 2005, or accumulated cyclone energy.)
As gung-ho as these four sets of authors seem to be for the alarmist side, they don’t say anything about why temperatures went up over the period they analyze, so the abstracts have to be rated “neutral.”
Five of the papers are about computer modeling of climate
Four of these simulate GHG driven warming and should probably be rated a 2 (“Explicit Endorsement without Quantification: abstract explicitly states humans are causing global warming or refers to anthropogenic global warming/climate change as a given fact.”)
One, for instance, predicts, as stated in its title: “Intensified Asian Summer Monsoon And Its Variability In A Coupled Model Forced By Increasing Greenhouse Gas Concentrations.” The model results actually would yield a quantification of the GHG effects on warming and variability, and since these models don’t anticipate secular changes in any forcings other than GHGs, they presumably end up attributing essentially all predicted climate change to rising CO2. Still, the abstract doesn’t actually say anything about current warming being caused more than 50% by CO2 so the paper can’t be rated a 1. Ditto for “Projection of Future Sea Level,” “Dynamic And Thermodynamic Influences On Intensified Daily Rainfall,” and “Changes in Water Vapor Transport.”
None of these model simulations provide any evidence for CO2 driven warming. The models used are all calibrated using the assumption that no other important forcings were at work over the 20th century. In particular, they all assume there is no mechanism of solar forcing beyond the very slight variation in total solar irradiance, which is parameterized in current IPCC models as having only 1/40th the forcing effect of increasing CO2 (AR5 SOD p. 8-39 table 8.7).
Since these models start with the assumption that warming is caused by CO2, they cannot in turn provide evidence for that proposition. That would be a circular argument. The one exception might be if the authors could make a solid case that their models succeed in capturing complex emergent phenomena observed in nature, but the opposite is what is actually happening. The models completely miss their basic predicted “fingerprints,” like the predicted upper-tropospheric “hotspot” that thermometers can’t find.
The fifth computer-model paper (“Scaling in the Atmosphere”) looks at how to achieve higher-resolution models. Since all the big taxpayer-funded computer models are driven primarily by CO2 and completely omit any enhanced solar forcing, efforts to refine the resolution of these models are another form of toiling in the “consensus” quarry, yet this paper has to be Cook-rated “neutral” because it doesn’t even mention any predicted warming, never mind its cause.
That’s five “neutral” ratings and four “explicit without quantification” out of nine thoroughly consensus-conforming papers.
The last abstract is the only one that addresses the evidence for what might have caused the modicum of late 20th century warming, and it is the most revealing
By coincidence, this happens to be a paper I wrote about in 2011. It is Knud Lassen’s 1999 update to his seminal 1991 paper with Friis-Christensen. Their original research had found a strong correlation between solar cycle length and global temperature change.
Evidence of a substantial solar effect on climate would soon come to be regarded as “contrarian” and Lassen at least seems to have been eager to get over to the safe side and renounce his earlier work. When the 1997-98 El Nino caused global temperatures to spike dramatically Lassen was hair-trigger on the opportunity to issue a 1999 update, declaring that with the rapid increase in recent temperatures the correlation between solar cycle length and temperature had been broken.
This really was in reaction just to the El Nino spike. There is no way that Lassen could have said in 1996 that the planet had warmed too much to be consistent with the pattern he and Friis-Christensen had discovered through 1991. Solar cycle 22, which ended in 1996, was the shortest of the modern-maximum era, so Lassen should at that point have been expecting temperatures to rise, but take a look at the satellite temperature record through ’96. Unless you want to cherry-pick end points, there is no significant trend going back from ’96 to the beginning of the record.
Before the El Nino, Lassen should have been seeing temperatures as below what his 1991 paper would predict. Only the 97-98 El Nino offered Lassen a data-based opportunity to bail on his now-contrarian earlier work and claim that temperatures had become too high, and he jumped right on it, even though the whole world knew that the temperature spike was due to an ocean oscillation and hence would be at least largely temporary.
This was a desperately unscientific move. While Lassen was still writing his update in mid-’99, temperatures had already dropped back to 1990 levels. His eight-year update was outdated before it was published.
So how eager has Lassen been to update again, now that the next dozen years of data have strongly supported the solar warming theory? Crickets. The long solar cycle 23 has coincided with at least an extended pause in global warming and Mr. rush-to-update is nowhere to be found.
Bah, who needs him? Others have gone on to discover that the stronger correlation is between solar cycle length and global temperature change over the subsequent solar cycle, suggesting that the planet may now be heading into a cooling phase. That’s actually worth worrying about, but some people can only worry about fitting in.
Lassen, even though he is the most obsequious of these ten obsequious sets of consensus-conforming authors, doesn’t rate particularly high on Cook’s consensus scale. That’s because he doesn’t actually mention GHGs at all, but just claims that solar cycle length seems not to have been the cause of the 1990s warming. Give him a 3:
Implicit Endorsement: abstract implies humans are causing global warming. E.g., research assumes greenhouse gases cause warming without explicitly stating humans are the cause.
Summary
Using John Cook’s highly specific criteria for identifying supporters of the “consensus” that people are causing dangerous global warming, my final tally is five abstracts with a rating of 4 (“neutral”), four with a rating of 2 (“explicit endorsement without quantification”), and one with a rating of 3 (“implicit endorsement”), yielding an average rating of 3.1 on Cook’s 1-to-7 scale.
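For anyone who wants to check the arithmetic, here is a minimal sketch in plain Python, using only the tally reported above:

# Tally from the ten abstracts I rated (Cook's 1-7 scale):
# five rated 4 ("neutral"), four rated 2 ("explicit endorsement
# without quantification"), and one rated 3 ("implicit endorsement").
ratings = [4] * 5 + [2] * 4 + [3] * 1

average = sum(ratings) / len(ratings)
print(average)  # 3.1, matching the figure in the automated response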
My broader assessment is that all the authors are flaming full-on consensoids. Mr. Cook is welcome to avail himself of this assessment if he thinks it strengthens the case for a genuine scientific consensus. I have a different interpretation. My anecdotal sampling, if it turns out to be representative, strongly supports the skeptical charge that climate science is thoroughly dominated by a tyrannical politically-fabricated and monetarily-enforced “consensus.”
Almost all the papers in my sample simply assume that big fat warming, implicitly anthropogenic, is on the way, either building computer models based on this assumption or projecting its various possible harmful consequences. Not one of the ten contains even a shred of evidence for the CO2-warming theory. The one paper that pretends to provide evidence against the competing solar-warming theory was a mad rush to misrepresent a temperature spike that the authors had every reason to believe was the product of ocean oscillations, and they have since proven their bias by refusing to acknowledge that the much longer subsequent record of no warming strongly supports their original hypothesis of solar-driven climate change.
In all, a clear picture of ideological bullying, self-censorship and rent-seeking, exactly what we should expect from a politically created and controlled branch of science.
Postscript
Is it Cook’s plan to invoke the authors’ self-ratings as an accurate standard of what the ratings should be? He seems to already be doing this in the automated feedback that he returns to respondents when surveys are submitted.
It is possible that he plans to note in his write-up how the author ratings, being ratings for papers not abstracts, should show a systematically higher level of consensus conformity (a lower numerical rating) than an honest and careful surveyor’s ratings of the abstracts for the same papers. We’ll see, but I suspect that Cook is instead planning on using the systematically high author ratings to accuse participants from skeptic blogs of bias for estimating less consensus conformity than the authors themselves.
At the same time he can count on participants from credulous blogs to overrate the degree of consensus conformity, making them look less biased and more honest than skeptical participants when they are actually more biased and less honest. As we all know, and as Cook would know, there is widespread belief amongst climate alarmists that it is okay to misrepresent specifics in support of the “larger truth” that human impacts on the planet need to be dramatically curtailed. According to the late Stephen Schneider, one of the founding fathers of climate alarmism:
Each of us has to decide the right balance between being effective and being honest.
Participants from both sides will feel an urge to rate the abstracts as more consensus-conforming than the rating instructions say, just because the instructions are so conservative in the false-positive-avoiding sense. Paper after paper that has obviously sprung from the “consensus” womb is to have its abstract rated “neutral.” Both sides will also feel a partisan interest in registering this actual degree of consensus conformity, with believers wanting to bolster their arguments from authority and skeptics being glad to document the blatantly unscientific groupthink in the peer-reviewed literature. But unlike skeptics, the consensus side believes that dishonesty in public debate can be both moral and efficacious, and Cook can use that.
One indication that he is thinking in this direction is the failure to mention in his invitation letter that different survey links were provided to each invitee. This omission suggests that he was trying to get away with something. In a court of law such behavior is taken as an indicator of “guilty conscience.” So what was his sneaky purpose?
This is indicated by the second thing he is not upfront about. He cites what would seem to be a “correct” estimate of what the ratings should be without noting that the self-estimates from the scientific authors are systematically biased vis-à-vis the survey questions. That extends the pattern of surreptitiousness and it suggests the purpose. Yeah, he was going to use this known-to-be-wrong “correct” rating to make the dishonest ratings from his consensoid compatriots look honest and skeptic ratings look biased.
He might not be able to get away with it now, but it is reasonable to suspect that this was the plan. Still, Cook’s effort need not be for nothing. The fact that the author ratings of the papers should be numerically lower than the ratings for the abstracts provides a simple test for which group of survey participants is less biased. The group that is more honest and self-controlled, giving in less to confirmation bias, will submit lower assessments of the degree of consensus conformity (numerically higher ratings).
In particular, these ratings should be numerically high compared to the authors’ self-ratings, so whichever group comes out closest to the author ratings should be more biased and less honest. I’d like to see that experiment go through, though it might be tainted now that people know Watts Up.
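Here is a minimal sketch, in Python, of how that comparison could be run if per-abstract ratings keyed to referring-blog type were ever released. The paper labels and all the numbers below are hypothetical, purely for illustration:

# Hypothetical data: authors' self-ratings of their papers, and average
# surveyor ratings of the corresponding abstracts, split by referrer type.
author_ratings = {"paper_a": 2, "paper_b": 3, "paper_c": 4}
skeptic_ratings = {"paper_a": 4, "paper_b": 4, "paper_c": 4}
consensus_ratings = {"paper_a": 2, "paper_b": 3, "paper_c": 4}

def mean_gap(surveyor, author):
    # Average of (abstract rating minus author self-rating). If the rules are
    # applied as written, abstracts should score numerically higher than papers,
    # so an honest group should show a clearly positive gap.
    return sum(surveyor[p] - author[p] for p in author) / len(author)

print("skeptic gap:", mean_gap(skeptic_ratings, author_ratings))
print("consensus gap:", mean_gap(consensus_ratings, author_ratings))
# On the argument above, the group with the smaller gap (closest to the author
# self-ratings) is the one to suspect of overstating the consensus.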
@davidmhoffer
You need to restart your browser to get a new set of papers.
I agree with Henry Galt. I wouldn’t take the survey because it would help legitimize an exercise that, its author’s record indicates, will almost certainly be dishonest, and the results will be used dishonestly to make me look, at best, biased.
Why on earth would I do that?
A majority exists when, out of a group of consulted people, one view receives more votes than the others.
Unanimity happens when all voting people positively choose one view, without any exception.
Consensus is reached when nobody expresses his or her opposition to one view.
All of this happens in opinion polls or formal votes, but not in science.
So, what is the fuss about grading scientific papers on whether and how they convey opinions?
Why would anyone write a paper on the “Possible Impact Of Global Warming On Cabbage Root Fly (delia-radicum) Activity In The UK,” assessing an obscure possible consequence of several degrees of global warming, if they didn’t believe the IPCC’s projections that human CO2 emissions will indeed cause such warming?
Because they wanted to study Delia radicum and no-one else would fund the project.
Supposing that Cook really does have author-ratings for all the papers used for the survey, there is a major slip between the cup and the lip. The authors are described as rating the papers, while surveyors are asked to rate only the abstracts. This is critical because according to Cook’s rating rules the ratings are supposed to be based on what is or is not specifically mentioned. Obviously full papers discuss a lot more things than abstracts, especially unusually short abstracts.
The difference between the papers and their abstracts is the reason I chose not to complete the survey. It’s too easy for a rent-seeking author to add a few platitudes about CAGW in a paper’s abstract, when the paper itself is dismissive of man’s impact.
mpainter says:
“Why would anyone attribute disinterested scientific motives to John Cook?”
Geez, why would anyone attribute ANY scientific motive to John Cook?
Henry Galt says:
Why A N Y O N E is having A N Y T H I N G to do with this “survey” is beyond me.
If I got an email from Cook asking me to do a survey, I would tell him, using the most expressive language I could muster, to go jump in the deepest lake he could find, with a diver’s belt on!!
The guy is probably the slimiest person around… even in CAGW ranks.
Michel says:
May 9, 2013 at 2:13 am
For one reason: there is not a single one of those papers that provides the evidence that would stop all argument on the subject in its tracks.
The ‘thousands of papers by thousands of scientists’ that we always have shoved down our throats mostly show that there was some warming at the end of C20 and “it affected my pet project” or that (fantasized) future warming “will affect my pet project”.
I challenge any ‘believer’ to name, without the use of Google/Wikipedia/etc., 32 famous scientists who support the notion that CS > 0 and that this will cause pain for the environment, humans, or other lifeforms, for any chosen 3 of these (off the top of my head; I have endeavoured to choose members from different fields):
Richard Lindzen
Roy Spencer
Nils-Axel Morner
Robert Carter
Roger Pielke Sr
Freeman Dyson
Henrik Svensmark
Ian Clark
That would get them close to the “97% of climate scientists agree…” bullshit that they spout every chance they get.
I would bet that they cannot even get to 32 famous alarmists in total, and they would all be members of ‘the team’ that got us in this fix in the first instance.
By ‘famous’ I mean: in a relevant field, very well known in the debate, and regularly cited.
A survey worth doing?
Some people have a pathological need to create a ‘consensus’ when there isn’t one. It is their mission in life.
It is curious that Cook is even attempting this survey. It is flawed regardless of what he does. For a survey to be statistically valid, first the respondents have to be random. Second, the respondents cannot know who is conducting it (to remove any bias pro or con). Third, it has to be objective.
The bias against Cook (and for him in some circles) immediately invalidates the survey. Many refused to take it (thus eliminating the randomness), and others are biasing their responses based upon a preconceived opinion of the author. And finally, both the respondents’ answers and the survey author’s rating system are not objective, but highly subjective.
It is a beauty contest, nothing more.
Interesting – the survey I filled in had an “author average” of 2.6 too….
The Blackboard has a post (“I Tried” by Brandon Shollenberger) that documents John Cook’s explanation of how his SQL picks 10 papers at random, selecting from 12,000 papers just those with abstracts of less than 1000 characters, and with author ratings.
John Cook’s query:
SELECT * FROM papers WHERE
Self_Rating > 0 AND Abstract != '' AND LENGTH(Abstract) < 1000 ORDER BY RAND() LIMIT 10
I posted a modified query which would reveal the actual count of papers “in scope” – those with an author rating and an abstract of less than 1000 characters.
SELECT Count(*) as PapersInScope FROM papers
WHERE Self_Rating > 0 AND Abstract != '' AND LENGTH(Abstract) < 1000
Some other nuggets can be mined from this database.
For example, publication of the actual author-ratings, paper by paper. This could be analysed to detect any biases in climate paper authors….. For transparency’s sake, of course.
SELECT * FROM papers WHERE Self_Rating > 0
Anthony – these are trivial queries for JC to run, and the answers would go a long way to reassuring sceptical observers about the integrity of the survey.
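If an export of the database were ever released, these checks would be easy to script. A minimal sketch in Python, assuming a hypothetical SQLite copy with the same papers table and Self_Rating/Abstract columns (the live database is not public, and presumably runs MySQL, so this is illustration only):

import sqlite3

conn = sqlite3.connect("papers.db")  # hypothetical local export of Cook's database

# How many papers are actually "in scope" for the public survey?
in_scope = conn.execute(
    "SELECT COUNT(*) FROM papers "
    "WHERE Self_Rating > 0 AND Abstract != '' AND LENGTH(Abstract) < 1000"
).fetchone()[0]
print("Papers in scope:", in_scope)

# Breakdown of the author self-ratings, category by category.
for rating, count in conn.execute(
    "SELECT Self_Rating, COUNT(*) FROM papers "
    "WHERE Self_Rating > 0 GROUP BY Self_Rating ORDER BY Self_Rating"
):
    print("author rating", rating, ":", count, "papers")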
Cook didn’t need to survey the authors; he’s got a model! And I bet 2.6 is upwardly biased.
statistics of skeptics’ scores of interpretations of a scorelist about abstracts of papers about a complicated subject (climate), compared with statistics of non-skeptics’ scores of interpretations of a scorelist about abstracts of papers about a complicated subject (climate), against the background of a score writers give about the papers about a complicated subject (climate)
very interesting…
thingodonta says:
May 9, 2013 at 4:19 am
Some people have a pathological need to respond to surveys. Beats me.
Nothing good could come from this.
According to the crowd-funding post on the paper at SkS, Cook et al. asked “thousands” of authors to rate their papers. As there is an authors’ score for all submissions to the public survey (which is not a focus of the paper), one presumes they have only made public those papers which were rated by the authors.
According to the results posted by SkS regulars, they have applied the rules properly. EVERY result posted so far has been less conforming to the consensus assumptions than the author ratings. A remarkably unbiased result.
I took the survey and it seems I applied the rules correctly, too. I scored 3.5 against the authors’ 3.3. (My take was closer to neutrality than the authors’.)
A number of the participants read the full papers the abstracts came from, and all those agreed that the full papers were less neutral, and more often closer to the consensus than the abstracts.
All the above are expected results if everyone was doing an honest appraisal.
There is no bias, except what you make. You should have marked all those abstracts neutral. What was the score for the authors?
Yes, better to get someone else to do it. A neutral, critical mind won’t presume an outcome.
It’s not a focus of the paper, but I am a little interested in how the public understands the abstracts they read, and whether abstracts are more or less neutral than the papers they head. It’s more of a social study than the consensus review, and there is nothing wrong with that. There are polls here, and crowd-sourcing at other climate blogs, too. Not worth breaking a sweat over.
“It’s more of a social study than [a] consensus review”
That seems to be the opinion of regulars at SkS on the public survey, too. The comments are worth reading – for a while they also were confused about the project.
Alec, I admire your effort, as well as the many others who posted about Cook’s survey; however, I would like to see this energy channeled now in a different direction.
I have seen a couple of posts in this direction; however, the question is how we can gather and fuel the energy to make our “own” survey.
Hopefully a lot of posts and discussion will take place if any skeptic blog starts its own survey.
I think the time is right to run several such surveys, and it will be a lot of fun to dissect them. Think of it: we will have all the data available to look at, analyse, and play with!
Frankly speaking, what can we expect from Cook that is very different from what we got from him and Lew? Some new Lew papers? With no data? I expect only a cooked survey with no data made available (perhaps some collected data released after being purged), and one targeting only skeptics; however, it would be much more interesting to target the whole “climate community”.
I would like to see how well the answers of warmista correlate with conspiracy theories like fossil-fuel-financed skeptics, evil genetic-engineering companies, accidental nuclear holocaust, Nibiru/death-star believers, 9/11 truthers, and moon-landing theorists, as well as with their temperature projections in °C per 100 years, how many meters they believe sea level will rise in 100 years, and so on.
It would also be very interesting to see their correlation with Gaia theories, Malthusian theories, and so on.
Btw, many still have difficulties in naming warmista and skeptics. Somewhere, somebody is still unhappy with it. I would propose Cagewista versus Cagenix. It is a more neutral naming for both, where Cagewista is a CAGW-supporter and Cagenix is one who does not believe in the catastrophe of AGW. This could be a separate crowd-sourcing project where the best acceptable denomination may win :).
This is such a wide field, and it is really a pity that we are failing to gather the data. Future generations will try to understand what was happening, and without data we will have only our theories and the warmista “data”…
Given the data that are collected in the survey, Cook may really be trying to find a correlation between which fonts are installed in your browser and evaluations of climate papers. Or possibly which browser plugins.
I’m looking for a few people to give me feedback on a plugin to reduce fingerprinting of his survey:
http://rankexploits.com/musings/2013/survey-privacy-plugin/
(Also available here. But you might want to read the discussion at my blog.)
If you use a pseudonym and/or proxy, or wish to remain anonymous in any way, it is best you avoid having your browser fingerprint recorded. So, using the privacy plugin would permit you to fill out the survey while disguising the browser fingerprint data being recorded.
I also advise people select more or less randomly from the query strings that have been published:
http://rankexploits.com/musings/2013/links-to-john-cooks-survey/
If you do get blocked from my site and want to visit, my email is: my first name (you know what that is.) @ur momisugly rankexploits.com
“Lewd” Lewandowsky and John Cook-The-Books
Their time of the month is all of the time
Two screaming shrews too obsessed to observe
Even the modest decorums
Of a pseudo-science
Faux posers, like drag queens on a runway
Theirs the “Fashionism” of the future?
Hissy fit data deviates
Who would dress our children
Projective upon all whom they survey
Theirs a deconstructive analysis
“Souls undone undoing others”
Perverts taught and teaching
Eugene WR Gallun