UPDATE: More inconsistency:
Cook survey included 10 of my 122 eligible papers. 5/10 were rated incorrectly. 4/5 were rated as endorse rather than neutral.
— Richard Tol (@RichardTol) May 22, 2013
===========================================
When asked about the categorizations of Cook et al.: “It would be incorrect to claim that our paper was an endorsement of CO2-induced global warming”
Guest essay by Andrew of Popular Technology
The paper, Cook et al. (2013) ‘Quantifying the consensus on anthropogenic global warming in the scientific literature’, searched the Web of Science for the phrases “global warming” and “global climate change”, then categorized the results by their alleged level of endorsement of AGW. These results were then used to allege a 97% consensus on human-caused global warming.
To get to the truth, I emailed a sample of scientists whose papers were used in the study and asked them if the categorization by Cook et al. (2013) was an accurate representation of their paper. Their responses are eye-opening and provide evidence that the Cook et al. (2013) team falsely classified scientists’ papers as “endorsing AGW”, apparently believing they know more about the papers than their authors do.

Dr. Idso, your paper ‘Ultra-enhanced spring branch growth in CO2-enriched trees: can it alter the phase of the atmosphere’s seasonal CO2 cycle?’ is categorized by Cook et al. (2013) as: “Implicitly endorsing AGW without minimizing it”.
Is this an accurate representation of your paper?
Idso: “That is not an accurate representation of my paper. The papers examined how the rise in atmospheric CO2 could be inducing a phase advance in the spring portion of the atmosphere’s seasonal CO2 cycle. Other literature had previously claimed a measured advance was due to rising temperatures, but we showed that it was quite likely the rise in atmospheric CO2 itself was responsible for the lion’s share of the change. It would be incorrect to claim that our paper was an endorsement of CO2-induced global warming.”

Dr. Scafetta, your paper ‘Phenomenological solar contribution to the 1900–2000 global surface warming’ is categorized by Cook et al. (2013) as: “Explicitly endorses and quantifies AGW as 50+%”.
Is this an accurate representation of your paper?
Scafetta: “Cook et al. (2013) is based on a strawman argument because it does not correctly define the IPCC AGW theory, which is NOT that human emissions have contributed 50%+ of the global warming since 1900 but that almost 90-100% of the observed global warming was induced by human emission.
What my papers say is that the IPCC view is erroneous because about 40-70% of the global warming observed from 1900 to 2000 was induced by the sun. This implies that the true climate sensitivity to CO2 doubling is likely around 1.5 C or less, and that the 21st century projections must be reduced by at least a factor of 2 or more. Of that the sun contributed (more or less) as much as the anthropogenic forcings.
The “less” claim is based on alternative solar models (e.g. ACRIM instead of PMOD) and also on the observation that part of the observed global warming might be due to urban heat island effect, and not to CO2.
By using the 50% borderline a lot of so-called “skeptical works” including some of mine are included in their 97%.”
Any further comment on the Cook et al. (2013) paper?
Scafetta: “Please note that it is very important to clarify that the AGW advocated by the IPCC has always claimed that 90-100% of the warming observed since 1900 is due to anthropogenic emissions. While critics like me have always claimed that the data would approximately indicate a 50-50 natural-anthropogenic contribution at most.
What it is observed right now is utter dishonesty by the IPCC advocates. Instead of apologizing and honestly acknowledging that the AGW theory as advocated by the IPCC is wrong because based on climate models that poorly reconstruct the solar signature and do not reproduce the natural oscillations of the climate (AMO, PDO, NAO etc.) and honestly acknowledging that the truth, as it is emerging, is closer to what claimed by IPCC critics like me since 2005, these people are trying to get the credit.
They are gradually engaging into a metamorphosis process to save face.
Now they are misleadingly claiming that what they have always claimed was that AGW is quantified as 50+% of the total warming, so that once it will be clearer that AGW can only at most be quantified as 50% (without the “+”) of the total warming, they will still claim that they were sufficiently correct.
And in this way they will get the credit that they do not merit, and continue in defaming critics like me that actually demonstrated such a fact since 2005/2006.”

Dr. Shaviv, your paper ‘On climate response to changes in the cosmic ray flux and radiative budget’ is categorized by Cook et al. (2013) as: “Explicitly endorses but does not quantify or minimise”.
Is this an accurate representation of your paper?
Shaviv: “Nope… it is not an accurate representation. The paper shows that if cosmic rays are included in empirical climate sensitivity analyses, then one finds that different time scales consistently give a low climate sensitivity. i.e., it supports the idea that cosmic rays affect the climate and that climate sensitivity is low. This means that part of the 20th century should be attributed to the increased solar activity and that 21st century warming under a business as usual scenario should be low (about 1°C).
I couldn’t write these things more explicitly in the paper because of the refereeing, however, you don’t have to be a genius to reach these conclusions from the paper.”
Any further comment on the Cook et al. (2013) paper?
Shaviv: “Science is not a democracy, even if the majority of scientists think one thing (and it translates to more papers saying so), they aren’t necessarily correct. Moreover, as you can see from the above example, the analysis itself is faulty, namely, it doesn’t even quantify correctly the number of scientists or the number of papers which endorse or diminish the importance of AGW.”
The Cook et al. (2013) study is obviously littered with falsely classified papers, making its conclusions baseless and its promotion by those in the media misleading.
CVs of Scientists:
Craig D. Idso, B.S. Geography, Arizona State University (1994); M.S. Agronomy, University of Nebraska – Lincoln (1996); Ph.D. Geography (Thesis: “Amplitude and phase changes in the seasonal atmospheric CO₂ cycle in the Northern Hemisphere“), Arizona State University (1998); President, Center for the Study of Carbon Dioxide and Global Change (1998-2001); Climatology Researcher, Office of Climatology, Arizona State University (1999-2001); Director of Environmental Science, Peabody Energy (2001-2002); Lectured in Meteorology, Arizona State University; Lectured in Physical Geography, Mesa and Chandler-Gilbert Community Colleges; Member, American Association for the Advancement of Science (AAAS); Member, American Geophysical Union (AGU); Member, American Meteorological Society (AMS); Member, Arizona-Nevada Academy of Sciences (ANAS); Member, Association of American Geographers (AAG); Member, Ecological Society of America (ECA); Member, The Honor Society of Phi Kappa Phi; Chairman, Center for the Study of Carbon Dioxide and Global Change (2002-Present); Lead Author, Nongovernmental International Panel on Climate Change (2009-Present)
Nicola Scafetta, Laurea in Physics, Università di Pisa, Italy (1997); Ph.D. Physics (Thesis: “An entropic approach to the analysis of time series“), University of North Texas (2001); Research Associate, Physics Department, Duke University (2002-2004); Research Scientist, Physics Department, Duke University (2005-2009); Visiting Lecturer, University of North Carolina Chapel Hill (2008, 2010); Visiting Lecturer, University of North Carolina Greensboro (2008-2009); Adjunct Professor, Elon University (2010); Assistant Adjunct Professor, Duke University (2010-2012); Member, Editorial Board, Dataset Papers in Geosciences Journal; Member, American Physical Society (APS); Member, American Geophysical Union (AGU); Research Scientist, ACRIM Science Team (2010-Present)
Nir J. Shaviv, B.A. Physics Summa Cum Laude, Israel Institute of Technology (1990); M.S. Physics, Israel Institute of Technology (1994); Ph.D. Astrophysics (Thesis: “The Origin of Gamma Ray Bursts“), Israel Institute of Technology (1996); The Wolf Award for excellence in PhD studies (1996); Lee DuBridge Prize Fellow, Theoretical Astrophysics Group, California Institute of Technology (1996-1999); Post Doctoral Fellow, Canadian Institute for Theoretical Astrophysics, University of Toronto (1999-2001); The Beatrice Tremaine Award, Canadian Institute for Theoretical Astrophysics (2000); Senior Lecturer, Racah Institute of Physics, The Hebrew University of Jerusalem, Israel (2001-2006); The Siegfried Samuel Wolf Lectureship in nuclear physics, The Hebrew University of Jerusalem, Israel (2004); Associate Professor, Racah Institute of Physics, The Hebrew University of Jerusalem, Israel (2006-Present)
Wilcon says:
Can you show your understanding of the “various tests of statistical significance” that “are robust against that assumption” (of random sampling) by applying them to this nonrandom n=3 sample?
What nonrandom sample? Once again, that has not been established. That is just more of you making assumptions instead of inquiries. And your perseverance on N=3, as if that were meaningful in some negative way, indicates that you really don’t understand the maths involved.
Random sample or not, the information that PopTech has provided (and that Richard Tol has now expanded upon significantly) is important. From it, we know that Cook’s method excludes relevant publications, and gins up a consensus fallacy from people who are decidedly not part of that “consensus”. That Cook’s work is factually wrong is established by Poptech’s work. The only remaining question is the degree to which it is factually wrong, in addition to being 100% fallacious.
If you had any integrity, you would stop making up assumptions about PopTech’s methods, and acknowledge the invalidity of the methods to which Cook (in a mixture of ignorance and hubris) freely admits. That would include not only his methodological and statistical failings, but the epistemic errors as well – most glaringly the anti-science fallacy that is the basis of the entire exercise.
My impression that the consensus really is one comes from reading journals, going to meetings, talking to scientists. My impressions are anecdotes not data, but they seem consistent with data described in the published studies.
Given that you support Cook’s false and fallacious methods without question here, it is quite likely that you apply them yourself when forming your impressions. As you do with Cook, you probably ignore self-selection bias, you likely deny the possibility of gatekeeping and blind yourself to the self-censoring that it engenders, you probably intentionally misinterpret silence or use of a hypothetical as “implicit endorsement”, and freely equivocate on the substance of the matter to which your imaginary consensus is allegedly agreeing. And you probably also publicly support the notion of consensus as scientific, simultaneously applying social pressure to the end of creating a “consensus” while cynically denying that origin. Cook does all of these things, and you peep not.
Applying the same bad methods to the same dataset should give consistent results. That you believe that the consistency in results that comes from the consistent application of bad methods magically validates the methods is simply another example of the problem …
JJ
JJ says:
May 23, 2013 at 3:14 pm
“What nonrandom sample? Once again, that has not been established. That is just more of you making assumptions instead of inquiries.”
Wow, you ducked the question a fourth time! Wouldn’t it be more honest to admit you were bluffing, in all those places where you declared
– Poptech’s sampling was random?
– You did so know what “random sample” meant?
– You could “do the stats” and calculate a “population error rate” from his 3 responses?
– You understand “various tests of statistical significance” that “are robust against that assumption” (of random sampling)?
On the other hand I wasn’t bluffing. What is random sampling? (Short version) simple random sampling means that every element of the population has an equal chance of selection at each step. So there are two elements, (1) a defined population, about which we might want to draw inferences, and (2) some method to assure that each element has an equal chance of being chosen.
@Poptech, do your n=3 respondents represent a random sample? From what population? How did you give all elements of that population an equal probability of selection?
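For concreteness, the definition above can be sketched in a few lines of Python. The population size and sample size here are illustrative assumptions, not figures from this thread:

```python
import random

# Hypothetical population: IDs of papers returned by a literature search.
population = [f"paper_{i}" for i in range(1, 123)]  # 122 eligible papers

# Simple random sampling: every element has an equal chance of selection.
# A seeded PRNG makes the draw reproducible; sample() draws without
# replacement, so no paper can be picked twice.
rng = random.Random(42)
sample = rng.sample(population, k=10)

# Under this scheme each of the 122 papers had probability 10/122
# of ending up in the sample.
print(len(sample), len(set(sample)))
```

Both conditions in the definition are visible here: the population is enumerated explicitly, and the selection device (the PRNG) gives every element the same inclusion probability.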
The abstracts part of the Cook study, as I read it, had a clearly defined population (papers found by searching Web of Science for two key phrases) and sampling scheme (all). That should be a pretty representative sample for that population, although anybody can argue they want a different population.
Their author survey appears to start out from a reasonably clear population as well: (all?) first or corresponding authors with email addresses among those abstracts. Not all of them replied, however. Nonresponse was probably nonrandom, and how much this biased the conclusions should be an empirical question. Maybe not much, since the self-rating and abstract-rating percentages are similar.
Perhaps Richard Tol or somebody can design a better sampling scheme, that he can test to find out whether it gives different results.
As of right now I am not revealing anything more about my methods because it relates to other projects I am working on. Outside of that, those 3 are the only responses I have received so far, and I have emailed many more scientists. The data for Cook et al. (2013) has been shown to be wrong, and thus puts in question the entire study’s accuracy.
Their author self-survey only returned 14% of those they contacted. I am very confident non-response to the self-survey was due to the AGW position held by the author. The differences in the self-ratings should have been a warning that their own ratings were wrong.
populartechnology says:
May 23, 2013 at 4:19 pm
“The data for Cook et al. (2013) has been shown to be wrong and thus puts in question the entire study’s accuracy.”
All data contains errors. How much error, and how does it affect the conclusions? To answer that, you need data or analysis too, being careful about errors.
“The differences in the self-ratings should have been a warning that their own ratings were wrong.”
They said that in the paper. The difference suggested their own ratings were too conservative. Which makes sense since they were just based on the abstracts, and full papers give more information. It makes no sense to imagine the abstract ratings were error free, and the paper clearly admits that.
“I am very confident non response to the self-survey was due to the AGW position held by the author.”
Maybe your confidence is correct, but the author survey results still agree with the abstract ratings in support for AGW. And also with previous studies done by others, using different sampling and methods.
You have a better way, do the work and publish the paper so people can check. It’s not impressive here that you launch an attack against their methods, then hide your own saying “I am not revealing anything more about my methods” when questioned.
The names of the scientists are not hidden so their statements can be verified. I am not publishing a paper so I don’t have to do anything, let alone give out information relating to future projects I am working on. You just like Skeptical Science will have to sweat it out.
The authors’ self-surveys only support the notion that Cook et al. falsely classified scientists’ papers, since the two sets of ratings are not identical. Wrong information does not become correct because certain percentages are similar.
populartechnology says:
May 23, 2013 at 6:20 pm
“You just like Skeptical Science will have to sweat it out.”
Relax, I’m not sweating anything out. It will be interesting to see your research, when you bring it into the light.
“Wrong information does not become correct because certain percentages are similar.”
Of course not. But multiple methods reaching similar conclusions tend to undermine the individual critiques. If you are planning original research and not just more fault-picking, and you think it will reach different conclusions — that too will face tests of replication.
I disagree that multiple methods reached similar conclusions as Cook et al.’s method was not identical to the author’s self surveys.
Cook’s methods have already failed very basic tests of replication.
It is absolutely hilarious some of the papers I am finding classified as “endorsing AGW”.
Wilcon says:
Wouldn’t it be more honest to admit you were bluffing, in all those places where you declared
– Poptech’s sampling was random?
I did not say that once, let alone in “all of those places”. You people have a real problem with intentionally mischaracterizing what other people say.
“On the other hand I wasn’t bluffing.”
Yeah, ya are. Combined with a big dose of lying.
What is random sampling? (Short version) simple random sampling…
… is not the only kind of random sampling. Keep reading. You’ll get it eventually.
… means that every element of the population has an equal chance of selection at each step. So there are two elements, (1) a defined population, about which we might want to draw inferences, and (2) some method to assure that each element has an equal chance of being chosen.
And everything that you know about what poptech has done is consistent with that. Yet you bluff and bluster and pretend otherwise. Dishonest.
Maybe your confidence is correct, but the author survey results still agree with the abstract ratings in support for AGW.
No they don’t. Cook himself claims a minimum 50% error rate in the abstract ratings vs the author survey for a substantial subpopulation of his “survey.” He is very careful not to provide the specifics of that error assessment, lest the reader be able to draw proper inferences. However, from the information that he does provide we are able to determine that a minimum of 70% of sceptic papers in that subpopulation were miscategorized. This corroborates PopTech’s finding, and renders your dishonest objection to his work that much more egregious.
Meanwhile, C(r)ook’s paper is still founded in an anti-scientific fallacy. And he still supports it with methodological, statistical, and epistemological error. He still intentionally mischaracterizes silence or use of a hypothetical as “implicit endorsement”. He still equivocates on the meaning of the subject matter of the imaginary consensus. And you still turn a blind eye to the C(r)ookery, while attacking others with bluff, bald assertions and lies. This is the stuff of the “consensus”.
JJ
JJ says:
May 23, 2013 at 11:04 pm
“Wilcon says:
Wouldn’t it be more honest to admit you were bluffing, in all those places where you declared
– Poptech’s sampling was random?
I did not say that once, let alone in “all of those places”. You people have a real problem with intentionally mischaracterizing what other people say.”
Ah, you cut off the rest of my list, so it looks like “all those places” refers to just one thing and you can call me a liar. And by cutting off three points you ducked yet again (five times now?) an invitation to show knowledge instead of airily declaring that you have it.
This was my real quote:
“Wow, you ducked the question a fourth time! Wouldn’t it be more honest to admit you were bluffing, in all those places where you declared
– Poptech’s sampling was random?
– You did so know what “random sample” meant?
– You could “do the stats” and calculate a “population error rate” from his 3 responses?
– You understand “various tests of statistical significance” that “are robust against that assumption” (of random sampling)?”
So you can’t support any of those statements you made, but I’m a liar because you didn’t say Poptech’s sampling was random?
JJ says:
May 22, 2013 at 10:56 am
“You also do not appear to grasp that a random sampling of that sub-population with respect to the matter of the question has been achieved … unless:”
Since you plainly don’t believe your “unless” parts, I took your statement to mean what it says: you thought that random sampling had been achieved. Nothing of the sort has been done; not even Poptech will claim that. Your statement revealed you did not understand what random sampling is, despite claims to the contrary, so I defined the simplest case. You responded that simple random sampling
“… is not the only kind of random sampling. Keep reading. You’ll get it eventually.”
Enlighten me, I kept reading and don’t get it. Here you have yet another chance to show instead of declaring that you understand anything about statistics. What kind of non-simple random sampling do you see behind Poptech’s n=3? Do you think it was cluster sampling, or stratified, or systematic with a random start point? Or a multistage combination? Something even more complex? What device assured random selection?
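For reference, two of the non-simple schemes named above can be sketched in Python. The population and the strata here are hypothetical, purely to illustrate the mechanics of each scheme:

```python
import random

rng = random.Random(0)

# Hypothetical population of 100 papers, each tagged with a journal.
papers = [{"id": i, "journal": rng.choice("ABCD")} for i in range(100)]

def stratified_sample(items, key, per_stratum, rng):
    """Stratified random sampling: partition the population into strata,
    then draw a fixed number at random from each stratum, so every
    stratum is guaranteed representation."""
    strata = {}
    for item in items:
        strata.setdefault(key(item), []).append(item)
    out = []
    for group in strata.values():
        out.extend(rng.sample(group, min(per_stratum, len(group))))
    return out

def systematic_sample(items, step, rng):
    """Systematic sampling with a random start: pick a random offset in
    [0, step), then take every step-th element from there."""
    start = rng.randrange(step)
    return items[start::step]

print(len(systematic_sample(papers, 10, rng)))  # 10 elements
```

Either scheme still needs the two ingredients from the definition above: a defined population and a randomizing device, which is precisely what is missing from an ad hoc email canvass.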
The sampling issue is not minor, it goes to the heart of whether Poptech’s approach has any merit as research. So far there’s no hint about his sampling, and that n=3 sample looks totally biased. But the Cook paper certainly has weaknesses, and I’m all in favor of replications to address those.
populartechnology says:
May 23, 2013 at 7:38 pm
“I disagree that multiple methods reached similar conclusions as Cook et al.’s method was not identical to the author’s self surveys.”
This comment makes no sense, of course the methods are not identical. That’s why I called them “multiple methods.”
I meant to say Cook et al.’s results were not identical to the author’s self surveys.
Dr. Morner weighs in,
http://www.populartechnology.net/2013/05/97-study-falsely-classifies-scientists.html
Dr. Morner, your paper ‘Estimating future sea level changes from past records’ is categorized by Cook et al. (2013) as having; “No Position on AGW”.
Is this an accurate representation of your paper?
ROFLMAO! This has to be painful for Cook and Company,
https://www.google.com/search?q=97%25+consensus
Once upon a time, the consensus opinion was that the earth was flat.
Wilcon says:
Ah, you cut off the rest of my list, so it looks like “all those places” refers to just one thing and you can call me a liar.
Sweetheart, you claimed I said something that I did not. That is what makes you a liar. Listing one of those lies was sufficient to demonstrate that fact.
… I’m a liar because you didn’t say Poptech’s sampling was random?
Yes, that is correct. You are a liar, and that is but one example. That you are fundamentally dishonest is also shown in the fact that you support C(r)ook’s use of fallacious arguments in the first place. When one starts off backing a lie, it is inevitable that more will follow.
So far there’s no hint about his sampling, …
How nice of you to finally admit that you have no basis for the claims you have been making. That has been a fundamental thesis in my discussion with you. Perhaps now you will stop telling lies to argue against it. I would not put money on that, however.
The sampling issue is not minor, it goes to the heart of whether Poptech’s approach has any merit as research.
Nonsense. The “sampling issue”, and the lies you tell in support of it, is simply a distraction to draw attention away from the stench coming from the pile of feces that the C(r)ooks have put on the world’s doorstep.
Random sample or not, the information that PopTech has provided (and that Richard Tol has now expanded upon significantly) is important. From it, we know that Cook’s method excludes relevant publications, and gins up a consensus fallacy from people who are decidedly not part of that “consensus”. That Cook’s work is factually wrong is established by Poptech’s work. The only remaining question is the degree to which it is factually wrong, in addition to being 100% fallacious.
PopTech’s work is supported by the information that C(r)ook was not able to hide when he wrote his appeal to authority fallacy. C(r)ook himself claims a minimum 50% error rate in the abstract ratings vs the author survey for a substantial subpopulation of his “survey.” He is very careful not to provide the specifics of that error assessment, lest the reader be able to draw proper inferences. However, from the information that C(r)ook does provide, we are able to determine that a minimum of 70% of sceptic papers in that subpopulation were miscategorized. This corroborates PopTech’s finding, and renders your dishonest objection to his work that much more egregious.
Meanwhile, C(r)ook’s paper is still founded in an anti-scientific fallacy. And you still support it. And C(r)ook’s paper still props up that fallacy with methodological, statistical, and epistemological error. And you still support it. C(r)ook still intentionally mischaracterizes silence or use of a hypothetical as “implicit endorsement”. And you still support it. C(r)ook still equivocates on the meaning of the subject matter of the imaginary consensus. And you still support it.
You still turn a blind eye to the C(r)ookery, while attacking others with bluff, bald assertions and lies for saying things that are consistent with the 50-70% minimum error rate to which C(r)ook himself admits. This is the stuff of the “consensus”.
JJ
More dissent from Cook et al. (2013)…
http://www.populartechnology.net/2013/05/97-study-falsely-classifies-scientists.html
Dr. Soon, your paper ‘Polar Bear Population Forecasts: A Public-Policy Forecasting Audit’ is categorized by Cook et al. (2013) as having; “No Position on AGW”.
Is this an accurate representation of your paper?
rebuttal from the folks who wrote the consensus paper:
http://www.skepticalscience.com/tcp.php?t=faq
“How did you independently check your results?
Nobody is more qualified to judge a paper’s intent than the actual scientists who authored the paper. To provide an independent measure of the level of consensus, we asked the scientists who authored the climate papers to rate the level of endorsement of their own papers. Among all papers that were self-rated as expressing a position on human-caused warming, 97.2% endorsed the consensus. This result is consistent with our abstract ratings, which found a 97.1% consensus… each abstract was rated by at least two separate raters, with any conflicts resolved by a third reviewer… The entire database of 12,464 papers is available in the Supplementary Material. We have also published all our abstract ratings, which are also available via a search form… We have also created an Interactive Rating System, encouraging people to rate the papers themselves and compare their ratings to ours.”
Ben, how is that a rebuttal? They falsely classified these scientists’ papers, and the false classifications were published and conclusions derived from them.