Brandon Shollenberger writes: I’ve been mulling over an idea I had, and I wanted to get some public feedback. What would you think of a public re-analysis of the Cook et al data set?
A key criticism of the Cook et al paper is they didn’t define the “consensus” they were looking for. There’s a lot of confusion as to whether that “consensus” position is weak (e.g. the greenhouse effect is real) or strong (e.g. humans are the primary culprits). The reason for that is Cook et al tried to combine both definitions into one rating, meaning they had no real definition.
You can see a discussion of that here.
I think it’d be interesting to examine the same data with sensible definitions. Instead of saying there’s a “97% consensus,” we could say “X% believe in global warming, Y% say humans are responsible for Z% of it.” That’d be far more informative. It’d also let us see if rating abstracts is even a plausibly useful approach for measuring a consensus.
My current thinking is to create a web site where people will be able to create accounts, log in and rate a particular subsample of the Cook et al data. I’m thinking 100 “Endorse AGW” abstracts to start with should be enough. After enough ratings have been submitted (or enough time has passed), I’ll break off the ratings, post results and start ratings on another set of abstracts.
The results would allow us to see tallies of how each abstract was rated (contrasted with the Cook et al ratings). I’m thinking I’d also allow raters to leave comments on abstracts to explain themselves, and these would be displayed as well. Finally, individual raters’ ratings could be viewed on a page to look for systematic differences in views.
What do you guys think? Would you be interested in something like this? Do you have things you’d like added or removed from it? Most importantly, do you think it’d be worth the effort? I’d be happy to create it, but it would take a fair amount of time and effort. It’d also take some money for hosting costs. I’d like to have an idea of if it’d be worth it.
An added bonus to doing it would be I could move my blog to that site as well. Self-hosting WordPress takes more effort than using WordPress.com, but it allows for far more customization. I’d love that.
So, thoughts? Questions? Concerns?
By the way, don’t hesitate to tell me I’m a fool if you think I’m spending too much time on the Cook et al issue. I’ve been telling myself that for the last two weeks.
Source: http://hiizuru.wordpress.com/2014/05/20/a-re-analysis-of-the-consensus/
===============================================================
My opinion is that given the vast number of people interested in this at WUWT, we could likely crowd-source this work much more accurately and quickly than Cook did, without having to fall back on a small cadre of "like-minded friends". Both sides of the aisle can participate.
I don’t know what the result will be of such an analysis proposed by Brandon, but I do know that we can get far more participants from a much broader venue (since WUWT has almost an order of magnitude more reach than “Skeptical Science”) and that Brandon’s attention to detail will be an asset.
We already know many of the mistakes made in Cook's work, so a re-do has the advantage out of the gate. The disadvantage may be that the gatekeepers at IOP may refuse to publish it, and the University of Queensland may issue yet another bogus legal threat, since they seem tickled that Cook's 97% is the subject of worldwide gossip – Anthony
The 97 percent is often repeated in the Media. Moreover, it seems to have a strong effect. So, a “proper” evaluation of the source data could be valuable.
The concept of breaking down as suggested is good.
Suggest: Also read “Conclusions” sections of reports — as abstracts can be misleading.
Suggest: Find a way to look at all 11000 or so reports that were supposedly considered. Anything less can be “cherry picking”.
Suggest: Contact authors (as suggested above) to see if they agree with ratings of their papers (do not have them rate their own papers). However, due to "political correctness" and "self preservation" affecting their answers, this aspect might be a waste of time.
A searchable, reliable database on (the few?) climate papers that venture a guess at human-induced climate change would be useful. That would take some real effort though, more than Cook et al were ever willing to put into it. –AGF
Reanalyze? Yes. By crowd-source? No. A proper examination of the data would be very helpful. A crowd-source would be a free-for-all mash up of ideas, which isn’t good for finding the truth.
What does it matter what people ‘believe’ if there is no evidence to support the belief?
Show me evidence of unprecedented and catastrophic global warming (or global climate change or global climate chaos).
Then show me evidence that it is man’s emissions of carbon dioxide that are causing unprecedented and catastrophic global warming (or global climate change or global climate chaos).
more soylent green! says:
May 21, 2014 at 9:15 am
I’m certain many, many, many papers endorse AGW. Anecdotes suggest endorsing AGW or working “climate change” into a grant request is a near never-fail way to get funded.
The real question is — What evidence is being offered to support AGW?
Couldn't agree more!!! How many studies were of the form: how will the future climate harm the XXXX or the habitat of the XXXX?
This will be a waste of time. The best outcome will be to show Cook is a fraud – but we already know that –
It would be interesting to see trends (cf. Windsong)
How many warmists became lukewarmers, and how many lukewarmers became sceptics?
OTOH, how many lukewarmers or sceptics became warmists?
I’ll bet the vast majority of papers say something like this, “Given the great concern over climate change and its potentially devastating effects, we decided to study frog mating habits in ponds of different temperatures…” In other words, the authors don’t assert whether CO2-induced climate change is occurring, nor even whether or not they have the proper understanding to make a judgement on whether CO2-induced climate change is occurring, but they use the “great concern” as an excuse to fund research on whatever it is they are interested in researching.
Not worth the effort.
Fallacies do not cease to be fallacies because they become fashions.
G.K. Chesterton.
Extraordinary claims require extraordinary proof…not a convoluted computer program that does not match actual data.
cook’s project was a troll in the first place.
obviously, it’s still quite potent and has sucked you in.
what good is it besides inflating your notion of personal relevance?
it won’t stop the juggernaut.
but it will keep you busy spinning.
and next time, if there is one, just post the links to it so everybody can ‘hack’ it, why not?
this isn’t worth more than 15 seconds, imo. you did what there was to be done.
you asked, so i suggest you move along. dwelling on this = boring.
I’d say move on, the consensus will stick with their 97% figure regardless, it seems a bit of a waste of energy to me.
How could this be done without appearing to support the idea that the existence or nonexistence of a consensus among "scientists" is scientifically relevant? If 99% of scientists believe X and the 1% believing Y turn out to be correct, would the 99% prevail? Should science be based on majority rule?
I remember as a kid looking at the global map. The way that the continents fit together jumped out at me. Later I learned that there were a few scientists talking about continental drift. I was a “denier” even back then.
Strange mix of responses. Why would anyone oppose doing this?
Brandon has identified the obvious problem and may, or may not, realize that obfuscating the very information he seeks is a deliberate part of the strategy, and the participation of Dr. Lewandowsky, a psychologist, *is* narrowing the result to a single number with magical, emotional properties. He makes no secret of it; the entire study exists for a single purpose, to convey to the public that scientists are agreed on this dire emergency.
Consider 100 percent — easy to achieve (“100 percent of all abstracts that claim human activity is the primary cause of global warming declare that human activity is the primary cause of global warming”) but not very believable. Also, declaring 100 percent eliminates the “enemy”.
Consider 99 percent — it has been taken by “Occupy Wall Street”.
Consider 98 percent — well just skip to 97, it is a PRIME NUMBER and ends with the lucky number “7”. Not only that, but it leaves some room for Goldstein — the 3 percent.
The enemy of the 99 percent is the 1 percent. It is a perfectly arbitrary cutoff but it creates a nice “us versus them” propaganda talking point.
The enemy of the 97 percent is the 3 percent. It is amazing how many people you can pack into 3 percent.
FOR THE RECORD, I am in the 100 percent. So are you. Welcome to my club!
The BATTLE is over the fixed nature of “97”. If it can be shown to be fuzzy, that’s bad for the “message”.
As I have written elsewhere, it hardly matters what the percentage is — what exactly has been proven? Two huge possibilities exist:
1. Scientists are perfectly free to study whatever they like and still get paid for it. (Good science: reports results wherever they lead, no a priori bias)
OR
2. Scientists are paid to study specific things, and NOT paid to study NOT THINGS. (Not so good. Bias is inherent in the process).
The asymmetry is not easy for me to grasp or articulate. For any observation, a single cause exists or is the primary cause; an infinite number of other possibilities are NOT the cause. It is not rational to study any of these "not causes", so no papers on them will exist.
Suppose I was fascinated by blue-winged moths. I hired 15 researchers to study blue-winged moths. Eventually they turn in their reports. Along come some college students who do a survey on "moths" and conclude "97 percent of all papers on moths agree that they have blue wings".
But that is because it is the topic of the study. It doesn't exclude brown-winged moths; they simply were not studied and consequently do not show up in a list of abstracts. You cannot prove the existence of brown-winged moths by doing a survey of abstracts, 97 percent of which were about blue-winged moths.
NOW THEN, if 15 researchers were hired to study “moths” without any preconception of wing color, SOME of the papers would report blue winged moths, some white wings, some brown wings, and so on — it would be somewhat representative of the various species and then, and only then, would a survey of abstracts be “meaningful”.
So the missing, but CRUCIAL, part of this study is detecting whether the entire thing is tainted by confirmation bias — these 75 or so AGW-asserting abstracts, are they the result of specifically studying for AGW? If so, then ALL of them should assert AGW — but only in varying degrees of the "A".
WHAT was studied? Did the science proceed in an unbiased way, with no preconception what the outcome might be? I cannot see how such a thing is possible. To get a grant you must write a proposal, and you must have a hypothesis that you are proving or disproving.
HYPOTHESIS: Humans are the primary cause of global warming.
1. Papers that confirm the hypothesis will be funded and published and included in the Cook survey of abstracts.
2. Papers that do not confirm the hypothesis won’t be considered an AGW paper! They might still get published but won’t be included in the filtered set of AGW abstracts.
THE ACT of choosing which papers define the “consensus” IS the consensus!
So, a better proposal — and more difficult — is to take those 75 or so papers and go all the way back to their funding and hypothesis. WHAT PERCENTAGE of AGW papers were funded to study AGW specifically, a thing presumed to exist and needing only to be quantified?
Conversely, inspect some of the "discarded" papers in the Cook study — 11,000 or so mention global warming or climate change but did NOT assert AGW. That's incredible. He's right about one thing — only papers that actually study the causes of climate change should be counted, BUT to assume a priori that only AGW papers will be counted is a serious confirmation bias.
In other words, any “GW” paper should be included as the baseline from which “+AGW” papers become a percentage.
Brandon's idea of a database makes good sense. It can even "hang out there" for people to run SQL (Structured Query Language) SELECT queries to their heart's content. Columns would include title of paper, category, degree of relevance to climate change (0 to 10, from not relevant to highly focused), degree of assertion of natural cause, and degree of assertion of human cause (they are not rivals; a paper might make no assertions of cause, or it might be very specific about natural causes AND human causes).
These numerical relevance factors would be an aggregate of the reviewers' scores. Each reviewer would be offered an abstract, and maybe its full paper, at random. He can refuse to review it, of course, by simply not proceeding, but it should not be permitted to discard abstracts until you find one you "like".
Each reviewer's ratings would be tabulated in a different SQL table. Daily, a job would average all the reviews for a particular paper and store the result in the appropriate fields of that particular paper.
Thus at any time the database could be queried and the results slowly converge upon the “consensus of public reviewers” the details of which will be available if anyone wants it.
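A minimal MySQL sketch of the two tables and the daily averaging job described above (the column names climatechange and humancaused match the queries further down; the remaining names, types, and layout are purely illustrative, not from any existing schema):

-- papers: one row per abstract, holding the daily-averaged scores
CREATE TABLE papers (
  id            INT AUTO_INCREMENT PRIMARY KEY,
  title         VARCHAR(500),
  category      VARCHAR(100),
  climatechange TINYINT,  -- averaged relevance to climate change, 0-10
  naturalcaused TINYINT,  -- averaged assertion of natural cause, 0-10
  humancaused   TINYINT   -- averaged assertion of human cause, 0-10
);

-- ratings: one row per reviewer per abstract
CREATE TABLE ratings (
  paper_id      INT,
  reviewer_id   INT,
  climatechange TINYINT,
  naturalcaused TINYINT,
  humancaused   TINYINT,
  PRIMARY KEY (paper_id, reviewer_id)
);

-- daily job: fold the individual reviews back into the papers table
UPDATE papers p
JOIN (
  SELECT paper_id,
         ROUND(AVG(climatechange)) AS cc,
         ROUND(AVG(naturalcaused)) AS nc,
         ROUND(AVG(humancaused))   AS hc
  FROM ratings
  GROUP BY paper_id
) r ON r.paper_id = p.id
SET p.climatechange = r.cc,
    p.naturalcaused = r.nc,
    p.humancaused   = r.hc;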
It would actually be rather easy to set up. Your typical WordPress site is Linux and WordPress already uses MySQL, a very good and free database engine using standard SQL syntax.
To develop a consensus percentage you’d ask two questions:
Select count(*) from papers where climatechange > 5;
That gets the baseline number, papers primarily focused on climate change.
Select count(*) from papers where climatechange > 5 and humancaused > 5;
This gives a subset of those climate change papers that are AGW affirmative.
Then do a division. There’s your percentage. By changing the filter parameters you’ll get different percentages.
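If you wanted the percentage in one step, a single query could do the division as well (same illustrative column names, same arbitrary threshold of 5):

SELECT 100.0 * SUM(climatechange > 5 AND humancaused > 5) / SUM(climatechange > 5) AS consensus_pct
FROM papers;

In MySQL each comparison evaluates to 1 or 0, so the two SUMs are simply the counts from the queries above.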
But what does it MEAN?
Well then you get a list. Repeat the query but without the “count”. Start looking.
The database ought also to have some columns for funding source, funding amount, hosting institution (university, corporation, etc.), and authors and co-authors (to help discover alliances and mutual back-scratching arrangements, independence or lack thereof, and to answer the question of how many scientists there are anyway).
Since Scopus is already a database it might be that this whole thing is “moot” and can simply be done on databases that already exist. What won’t be done is public ratings.
We need to find out how many climate scientists think catastrophic warming is imminent, how many think the warming will be problematic (but not catastrophic), how many think it will be moderate and easily adapted to, how many think it will be minor and not worth worrying about.
I think the warmists would rightly fear an accurate study that told us all that.
And please, let’s all stop using the misleading term “climate change” as a substitute for the more specific and accurate “catastrophic anthropogenic climate change” (CACC) or “anthropogenic climate change.” (ACC) It’s nothing more than manipulation of the language to give the mis-informers the advantage in what has heretofore been a rigged debate.
go for it
One of the huge issues I have with the Cook data is that they went back to 1991. Why use papers from over two decades ago, in a field that has (for those looking) generated tremendous empirical data to contradict the almost unopposed theory of the time?
This actually defines the problem we face today, over two decades later. We have too many climate scientists who wrote papers or had "faith" in global climate models in the 1990s and have not made the right adjustments to incorporate real-world data over the past 15 years.
Looking for hidden heat, or for short-term, unusual (natural) offsetting theories to account for the lack of recent warming, while still accepting the same longer-term theory as projected by global climate models with high sensitivity to CO2, is the biggest problem right now.
It is very likely that whatever number comes up from a new survey, it will be MUCH lower than Cook's because:
1. It will be less biased
2. All the objective empirical data from the real world has been going against CAGW theory and some scientists are acknowledging that.
However, my problem is with the large number of scientists still riding the gravy train and/or influenced by the many cognitive biases innate in all humans. Coming out with a study that includes this group as part of a consensus sends the wrong message.
Here are a few things to consider:
1. When the lower number comes out: "Look everybody, the consensus has dropped from 97% to 65% or 59% or 77%," and it proves the other one was bogus, or that the consensus is strongly shifting, or that the debate is not over after all. Or:
2. The skeptics did their own study and even that one shows a majority (even if it is smaller).
3. Remember, if one is using published research or peer-reviewed papers, it will strongly reflect the bias of the peer-review process.
4. I've been an operational meteorologist for 32 years, forecasting the effects of global weather on crops and energy use for the last 21 years. I've been on top of climate science for 15 years. Since I don't have any peer-reviewed papers and have not obtained government funding for any research, my view and those of people like me (greatly weighted towards the real world and empirical data/evidence) will not be in a survey that measures this. Again, the survey will be skewed towards those using theory or models, which gets published. That IS the problem.
5. I do like the idea of specific questions to establish "sensible definitions" that might be able to sort things out a bit. Maybe show that papers relying most on theory, models, or speculation show X, and those based just on empirical evidence show Y. If that could be done, it would reveal the problem.
Hi Brandon,
I was thinking along the same lines, but did not (and will never) have the time to make it happen. In summary I think:
– yes, it is worth re-doing this, because the Cook paper is causing much mischief
– yes, I think crowd-sourcing is the solution
– there will be concerns about the quality of the results. My understanding is that crowd-sourcing results can be reliable if multiple answers are collected, and appropriate statistical strategies are taken to weed out biases and incompetence. We would have to design the process quite carefully with that in mind. Do you have the expertise for that?
– would it make sense to restrict the respondents to people with advanced degrees, because they are familiar with interpreting abstracts? I think many of the WUWT followers fit into that category (e.g. I have a PhD in Physics).
Thanks – I will certainly participate if you proceed.
Mike
Also: it appears to me that Cook counted abstracts, not people? Could our solution be slightly more sophisticated and decide, for each author, whether they mostly agree, sometimes agree, or never agree with AGW?
Considering that many authors have more than one paper, how many times were they counted in this study as part of that 97%?
Countering the 97% meme is job #1 for our side. This is one way to do it–to demonstrate that Cook loaded the dice. As long as each evaluated abstract is posted alongside its rating, with the rater’s rationale given, and with room for comments, this won’t go far wrong. Ideally, the Cook rater’s evaluation could also be included in that tidbit.
Not if they’re randomly selected–by choosing papers whose authors or titles begin with a certain letter, for instance.
Cook is a UofQ science fellow?
I’d puzzled over Cook’s financing a few times. One normally earns little money running a lame website with a small number of adherents.
Now, revealing that Cook's work is UofQ's work and that Cook is one of their science fellows gives some evidence of where Cook was obtaining funds.
A topic that certainly bears on Steven Burnett's distinction between "hard science and soft science". What are UofQ's standards for allocating fellowships?
Wait, wait, wait. You’re going to do WHAT? You’re going to put lipstick on Kook’s pig? And then you’re going to kiss it? Count me out.
There’s a major issue with any analysis of the literature in that:
1. It’s subject to publication bias.
2. It overweights prolific publishers.
3. It’s influenced by what are active areas of research rather than what has been solidly concluded.
4. Most papers are on other topics – even an ardent believer or disbeliever might not write about it in their paper if it’s not relevant to what they’re doing.
5. People might publish conclusions that don’t match their own opinions for various reasons – including the ‘declaration of faith’ to get it published, or because the dataset you happen to be reporting on goes against the general pattern.
6. It’s not measuring the variable anybody is really interested in, or reports.
A much better idea would be to replicate and extend the Bray/von Storch surveys. They ask the right sort of questions (although I’m sure we could recommend improvements), they ask actual people rather than try to reconstruct from paper proxies, and they ask a much broader category of scientists.
I’d be interested in seeing the survey extended to all scientists and engineers, and show the breakdown by scientific subject. (e.g. are chemists notably sceptical, as some have suggested?) I’d like to see more detail on the finer gradations around detection/attribution. I’d like to see more detail on their beliefs about danger/damage – do they believe it to be the end of the world, or a minor annoyance? And most of all, I’d like to know *why* they believe what they do and where they got their information from. Did they trust the experts, or did they download and examine data themselves? What do they think the reasons/evidence actually are? Do they know?
I’d also chuck in a few technical questions to test their climate knowledge (a far better measure than counting papers published for judging actual expertise). A plot of belief versus competence might be interesting.
The main issue with this idea, of course, will be getting an unbiased sample of scientists to participate in a survey run by climate sceptics, so I don’t regard this as easy. Internet surveys are far too easy to influence, even unintentionally. But it’s an important question for understanding the *sociology* of science, and of scientific controversies.
And maybe if we do it properly and know what the real answer is, they’ll stop with all the 97% rubbish. They can only get away with it because nobody knows what the actual answer is.
And for that matter, I’d be quite interested in the breakdown of opinion among sceptics, too.
Donna's crowd-sourced 2009 IPCC audit addressed most up-thread concerns; I found her protocols satisfactory.
“Advice, Donna?”
………………
Thank you for the UQ [Unknown Quotient/Quintile?] link, where we find this claim:
” He created SkepticalScience.com, a website that refutes climate misinformation with peer-reviewed science. “
What do you think of the emperor’s new clothes?
We paraded the emperor, surveyed 1,000 leading citizens, rated them, racked them, stacked them. Results came in:
A: very nice threads 67.1%
B: would have preferred tweed 32.8%
C: he’s buck naked, what’s wrong with you people 0.1%
“If I were wrong, one would have been enough”
Response to a Nazi pamphlet entitled "100 Authors against Einstein".
Utterly pointless exercise. Let Cook stew.