I’ve been made aware of two different opinions on the state of science publishing as it relates to peer review and the pressure to publish ever faster, driven by the Internet and all of its “instalaunch” tools.
First, in Nature, a comment by Dr. Jerome Ravetz: Sociology of science: Keep standards high.
He argues for embracing the new medium, while maintaining quality:
As more people become involved in online debates, quality need not fall by the wayside. It is encouraging to see that well-conducted discussions of points of contention between the scientific mainstream and critics are emerging, as the Berkeley Earth Surface Temperature study demonstrates (see Nature 478, 428; 2011).
Ultimately, effective quality assurance depends on trust. And science relies on trust more than most institutions. As Steven Shapin, a historian of science at Harvard University in Cambridge, Massachusetts, showed in his 1994 book A Social History of Truth, trust is achieved and maintained only by mutual respect and civility of discourse. In a digital age, civility should be extended to, and reciprocated by, the extended peer community.
Scientists have a special responsibility, but also a special difficulty. When their training has been restricted to puzzles with just one right answer, scientists may find it hard to comprehend honest error, and may condemn those who persist in apparently wrong beliefs. But amid all the uncertainties of science in the digital age, if quality assurance is to be effective, this lesson of civility will need to be learned by us all.
Dr. Judith Curry has some thoughts on this; she writes:
I am a fan of the concept of “extended peer community” put forth by Funtowicz and Ravetz. Also, Ravetz’s phrase “the radical implications of the blogosphere” has definitely stuck in my head. Re the civility issue, I agree some level of civility is needed. Some think that Climate Etc. is too raucous (a not infrequent complaint made at collide-a-scape). A fair place for an honest debate might not be especially courteous. But the blogosphere enables a range of different types of fora and moderation rules. The challenge is to extract signal from the noise. I am pleased that sociologists are studying this.
At the same time, we have an editorial in Nature Geoscience, Embargoes on the web, stating that scientists are increasingly acting as reporters, and as a result sometimes run afoul of publication rules. I see this as a shot across the bow against such practices.
Now that researchers, too, are acting as reporters, the guideline for talking freely to scientists but not to journalists may sound contradictory. Who should count as a member of the media for the purpose of the Nature journals’ embargo policy? The same basic rule applies: if an author actively seeks media attention before publication, we consider this a breach of our embargo policy.
At the same time, it is important to Nature Geoscience and fellow Nature journals that the scientific debate does not stop while a paper is under consideration. This principle also remains: we want our authors to present and discuss their results at conferences and communicate them to their peers. So if someone in the audience — journalist or scientist — tweets or blogs about a talk, we will not consider it to be a breach of our pre-publication embargo (see also Nature 457, 1058; 2009).
Where they say:
…if an author actively seeks media attention before publication, we consider this a breach of our embargo policy.
This squarely applies to the pre-publication publicity stunts pulled by Dr. Richard Muller and his BEST team.
People wonder why I dropped my support for him (like the feckless Dr. Peter Gleick and his science B.S. of the year awards); the answer lies in the shenanigans he pulled after earning my trust to use my data. I had always expected my data to appear in a full peer-reviewed publication. Instead, Muller spewed it in Congress and in his own media blitz, releasing papers that hadn’t even run the peer review gauntlet.
It may take some time (and additional train wrecks like BEST) before scientists learn that they can be their own worst enemy with this sort of behavior.
OTOH, I’ve been considering a web 2.0 peer review experiment of my own. WUWT now has the ability to offer a peer review service for articles and papers. It is a new feature I can activate in WordPress, and it would allow comments by invited reviewers to be posted for authors prior to publication, so that articles can be evaluated by a broad base of technical readers before they appear.
I welcome readers’ thoughts on this idea. – Anthony
Interestingly, Skeptical Science is reviewing their Comments Policy (see: http://www.skepticalscience.com/2012-SkS-Weekly-Digest_1.html).
One wonders whether they might consider toning down after the inflamed rhetoric of their B.S. awards.
I suspect not, given the general level of satisfaction expressed by most contributors. The success or failure of any endeavour at wider peer review depends greatly on a commitment to courtesy and consistency.
Judith Curry to her credit manages a very civilised forum with very light moderation whilst accommodating quite disparate views. While clearly accepting the premises of AGW, she seems equally willing to acknowledge the multiple uncertainties in the science. Her latest thread, Too Big to Know (see http://judithcurry.com/2012/01/09/too-big-to-know/#more-6528), courageously tackles the complexities too many of us seek to avoid.
WUWT to its credit selects many articles presenting scientific work from pro, anti, and neutral perspectives vis-a-vis AGW. Unfortunately, many comments take an overly curmudgeonly contrarian outlook (“Not more models!” would be a very mild example) rather than offering a considered critique. Its role as a venting space for folks who are p*d off with the monolithic AGW “consensus” can undermine its credibility. Judith by contrast seems to manage to engage folks in the nitty-gritty of the scientific debate and its philosophical underpinnings whilst tolerating an eccentric visitor or two.
Why not look at a shared endeavour with someone like Judith? I suspect the two sites could learn a great deal from one another.
Dan In California has understood the problem. Ravetz’s conclusions are fine and dandy – about time – but for all the wrong sociological-claptrap reasons. Science is not ‘based on trust’. That would be… err… Theology.
Now here’s the thing. Science uses the ‘replication’ to which DIC refers, to test its hypotheses, sometimes with startling results. Planck, for example, simply could not believe what he had previously thought to be truth ‘by law established’; and this disbelief forced him to change his own hypotheses. What a magnificent contrast he makes with so many modern ‘scientists’! And many of the most revolutionary ideas in Physics (in particular) came from nothing more than intuition, dreams, superstition or ‘metaphysics’. Einstein’s and Jeans’ and others’ ideas all share a Kantian climate whereby time, space and causality are no longer assumptions, but the subjects of an examination.
So truth is to be found ‘where’er it may lie’. One of the most scandalous attempts at the prevention of thought pre-AGW was the campaign against Velikovsky. His fundamental notion is that the history of the earth is governed by catastrophic events, not the uniform processes Darwin had assumed. Accordingly – for Darwin is a religious icon to many – and as we have seen with AGW, the Guardians moved in to shut him up. A typical weasel defence of this silencing was made by Bauer, summing up the affair. He tells us: Velikovsky’s claims were a priori invalid! Unfortunately, he does not say how he knows this independently of any argument from the ‘scientific’ authority he himself claims. Yet Velikovsky nonetheless obeyed the first command of science: he made his ideas available for public criticism.
Peer review may be a handy tool for inspecting the depth of research and the logical structure of hypotheses, including possible tests (etc.). But AGW has cruelly exposed its limits. So, provided expressed ideas reach a certain standard of overall literacy (including logical cohesion), anyone is entitled to say anything. The only proviso is that they are subject to criticism, the motor of all knowledge. Of course, ideally, the ideas should be new in some sense – scientific discovery is a creative act, and nothing is worse than boredom. It is the least probable hypothesis which is the most interesting. But: a cat may look at a king. Or a professor. More power to your elbow. In a free discussion, all points of view are welcome. Incidentally, this answers the question: Quis custodiet ipsos custodes? We do. All of us who willingly share our ideas.
Interesting how that will work. Inside the glass “expertise wall” around the reviewer site, raucous critiquing goin’ on by designated ‘Sperts. Outside the wall, the Watchers have their own hammer-and-tongs discussion going on. Occasionally producing a big enough “flash” or “boom” to require attention from inside the wall.
But where does each “review” start and end? If the “paper” is approved by the reviewers, then what? Entry into an “official” repository of PDFs??
Ain’t we got fun?!?!
I think that while science is timeless, there is a different spirit in every age. From this I think the problems are not with science itself. But in every age and every culture, wise people who spoke out – Socrates, Jesus, Omar Khayyám *), G. Bruno, G. Galilei, N. Copernicus – were attacked by self-appointed gatekeepers. Some wise people did not speak at all, and only wrote poems on the Cold Mountain rocks (Han-Shan) or 9 x 9 poems in the last days of life (Lao-Tzu).
I read that the Royal Academy of Science was convinced by Sir Robert Ball that communication with the planet Mars was a physical impossibility, because it would require a flag as large as Ireland, which it would be impossible to wave. That was in 1893.
Brian Josephson, Nobel Prize for Physics 1973, says on his homepage: “I am ‘endorsed’ for quant-ph, but find myself blocked when I try to cross-list a paper to make it visible to subscribers to that area, and to search in that area. My letters to the moderators and to Prof. Ginsparg concerning this blocking are ignored.”
I think it is obvious that the spirit of this age is determined by the rationalism of the mind, and so too the spirit of science. This spirit is very different from that of other ages, when science included philosophy, astronomy, algebra, theology, medicine, jurisprudence and astrology.
I think the basis for being able to speak words like ‘Keep standards high’ is the personal standing of individuals – like A. Watts – for the timeless ethical principles of science, and not new aristocratic claims to blocking or filtering so-called amateurs.
If any publisher of truth feels pressed, he is not interested in truth, but in victory. But science knows no victory. Science is timeless.
The spirit of the age seems to change or to shift to other modes. Hierarchy structures crash down. New structures will build. Science is timeless.
*) “I was unable to devote myself to the learning of this algebra and the continued concentration upon it, because of obstacles in the vagaries of time which hindered me; for we have been deprived of all the people of knowledge save for a group, small in number, with many troubles, whose concern in life is to snatch the opportunity, when time is asleep, to devote themselves meanwhile to the investigation and perfection of a science; for the majority of people who imitate philosophers confuse the true with the false, and they do nothing but deceive and pretend knowledge, and they do not use what they know of the sciences except for base and material purposes; and if they see a certain person seeking for the right and preferring the truth, doing his best to refute the false and untrue and leaving aside hypocrisy and deceit, they make a fool of him and mock him.”
Omar Khayyám (1048-1131)
V.
I think that your proposed concept has great merit. It is time for scientific journals to take the next step onto the Internet.
There is one particular practice that I would strongly suggest in addition to what other comments have proposed: I would require the submission of all raw data, methods, computer code and other supplementary materials at the time a paper is submitted.
While making supplementary material available is a stated requirement of many (if not most) journals, few (none?) seem to be enforcing it. To me, this embodies the true measure of a scientist: the willingness to allow others to examine one’s work for errors, biases, oversights, or just plain stupidity. Without access to the underlying data used to reach a conclusion, how can any other scientist, mathematician, engineer or professional ever hope to really evaluate the work?
Thanks for the opportunity to contribute to such a noble effort.
I think there is serious agreement to do it.
So, the big question is how to do it. I suggest we could use the ‘hunt the mammoth’ mentality.
The ‘tribe’ picks a few of the better hunters, and maybe a few people who have some related experience; we ‘camouflage’ them by assigning an anonymous review number and send them in to throw spears while we watch and comment. A two-tier system: the first tier chooses the champions, and the second tier is the ones who go in and do the work. Then the first tier ‘cheers’ them on, maybe bringing up other points and throwing in extra ‘champions’ as needed. Say we find somebody in the tribe who specializes in a certain field that is related; we add them to the mix. That would allow a wide group of reviewers and a larger pool of experience, while at the same time giving us a few representatives of the group as a whole. There is also a place where the common folk such as myself can give input (tribe level) and maybe even bring up points the champions might not see (coaching from the sidelines), whereas the champions – say, Monckton? McIntyre? Willis E? – throw spears as they see fit. Add a referee, say, Dr. Curry, to oversee, and you have a raucous process that would allow a more complete review. We could even have the original paper’s author input, defending the ‘mammoth’ (that might be a really bad idea though, but it could be fun).
Another point:
For the initial test run of this or any process, we can practice. We don’t need to use a new, untested paper. We can use an existing paper and test the process on it. Such papers exist, and already have comments. Take one that was reviewed, review it again, and see which process produces the better result.
Thoughts? Better ideas?
@ferd berple
“WUWT could open up a new standard in Peer Review 2.0 by allowing any and all review that speaks to the methods of the paper, and let the conclusions speak for themselves.”
I believe that this is the approach adopted by the Public Library of Science in evaluating papers for their journal PLoS ONE. If qualified reviewers judge the methods to be valid and the conclusions follow logically from the results, the paper is published.
But even in the evaluation of methods, which may include complex and controversial statistical and mathematical analyses, there will be sharp disagreements, which means that, ultimately, either someone has to make a judgment on what is fit to publish, or absolutely everything is published, in which case we can be sure that much and probably the vast majority of what is published will be, if not rubbish, then at least of minimal significance, even though the methods and the logic are sound.
The Web, however, could provide a means of sorting the almost certainly futile from what might contain the germ of an important idea, or the intimation of a significant fact. For example, software could be devised that would attach something like a Google page rank to every publication, but weighted by time since publication and taking account of reader evaluations.
Reader evaluation could be more sophisticated than Amazon’s book review ratings (one to five stars). For example, the rating of a rater would be weighted according to the number and ranking of the weighter’s own publications.
This would not provide a route to absolute truth, but it might be instructive.
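The time-weighted, reader-weighted rating proposed above can be sketched concretely. A minimal illustration, assuming (hypothetically) an exponential age decay and per-rater weights derived from each rater’s own publication record; the function name, the half-life, and the 1–5 rating scale are all illustrative choices, not part of the proposal itself:

```python
def article_score(reader_ratings, published_ts, now_ts, half_life_days=180.0):
    """Score a publication by its reader ratings, decayed by age.

    reader_ratings: list of (rating, rater_weight) pairs, with rating on a
    1-5 scale; rater_weight could itself be derived from the number and
    ranking of the rater's own publications, as proposed above.
    published_ts / now_ts: Unix timestamps in seconds.
    """
    if not reader_ratings:
        return 0.0
    total_weight = sum(w for _, w in reader_ratings)
    weighted_mean = sum(r * w for r, w in reader_ratings) / total_weight
    age_days = (now_ts - published_ts) / 86400.0
    # Exponential half-life decay: a 180-day-old paper counts half as much.
    decay = 0.5 ** (age_days / half_life_days)
    return weighted_mean * decay
```

A fresh paper rated 5 by a heavily weighted rater and 3 by a lightly weighted one scores their weighted mean; the identical ratings on a 180-day-old paper score exactly half that, so recent, well-regarded work floats to the top.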
CanSpeccy,
Those concerns could be resolved by having a parallel post where anyone could comment. The Web 2.0 peer review article comments should be limited to actual peers with the requisite qualifications, such as being a previously published author, having a PhD in a related field, etc.
Oops, sorry again. The strain of devising an entire new automatic system of peer review overwhelmed a neural circuit.
That last sentence should have read:
For example, the rating of a rater would be weighted according to the number and ranking of the rater’s own publications.
Who is this Hugh Errors? He sounds like trouble to me.
The key issue in getting proper peer review is as follows:
1. Ensure that the field is not infected with group-think/religious rules. This happens from time to time in some fields. It doesn’t matter how many reviewers you have if all scientists are constrained.
2. Ensure that petty disputes/career struggles don’t impact on reviewing. It was well known in medical research 20 years ago that you had to name certain people who couldn’t be allowed to review your paper – they were submitting similar stuff/you had beaten them to it etc etc. If one in three says no, you’re done for. If it’s one in twenty, chances are you will carry the day.
So in general, the greater the number of reviewers, the bigger the likelihood of good reviewing.
But also, the greater the time spent by reviewers reviewing, which leaves them less time for actually doing research themselves.
Trial and error will bring you to an optimum, I suspect……
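The one-in-three versus one-in-twenty point above can be made quantitative under a crude assumption: reviewers are drawn independently from the field, and a single hostile reviewer is enough to sink the paper. This is only a sketch of that assumption, not a model of any real journal’s process:

```python
def survival_chance(hostile_fraction, n_reviewers):
    """Chance that none of the drawn reviewers is hostile, assuming
    independent draws and that one hostile reviewer sinks the paper."""
    return (1.0 - hostile_fraction) ** n_reviewers
```

With three reviewers, a field where one in three is hostile leaves roughly a 30% chance of survival ((2/3)^3 ≈ 0.30), against about 86% when only one in twenty is ((19/20)^3 ≈ 0.86) — which matches the intuition that naming a few excluded reviewers mattered so much.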
@Smokey
“Those concerns could be resolved by having a parallel post where anyone could comment”
Yes, there should be a comment thread associated with each article. Comments might be rated in the same way as the article, based on the author’s rating determined as outlined above.
This would not preclude comment by novices and outsiders, but it would make it possible to see at a glance if there is a consensus among the experts in the field.
The comment thread would also be the place where the author would be expected to rebut significant criticism, thus keeping much if not all of the relevant discussion in one place.
This proposal is definitely worth trying.
Rules on who can review, how many reviewers, how long the process remains open etc need to be established beforehand.
I would suggest that one or two reviewers ought to come from ‘outside’ the field of research of the paper under review. The reason for that is that often there is an ‘in-group’ language used which makes it impossible for those of other disciplines to understand. Thus we have the comments of various scientists outside climate science that they don’t really understand what is going on but trust the climate scientists to be right. That’s why we are where we are …
Anyway, go for it; there are lots of good proposals as to how to proceed collected here already.
OTOH, I’ve been considering a web 2.0 peer review experiment of my own. WUWT now has the ability to offer a peer review service for articles and papers. It is a new feature I can activate in WordPress, and it would allow comments by invited reviewers to be posted for authors prior to publication, so that articles can be evaluated by a broad base of technical readers before they appear.
I welcome readers’ thoughts on this idea. – Anthony
It might be a good idea. It might not. I’m not smart enough to predict the possible unintended consequences. The idea brought to mind this XKCD entry:
http://xkcd.com/927/
Not a perfect fit, but close enough, perhaps.
There again, probably worth trying out to see what happens.
Excellent, Anthony, so go for it! As Tallbloke says, ‘an idea whose time is come’. But… please don’t allow the pedants to clutter your proposal with ‘rules’ as the only rule should be that all participants must conduct themselves with brevity, grace and gentility.
Anthony, as one who has deliberated and written an article for submission to WUWT, then stalled, then started again ad nauseam, I would really welcome such a collaborative review process.
If my ideas are to be published and accepted or reviled, at least I would have the benefit of some learned feedback to at least provide confidence that what I submit is viable.
Wouldn’t it be great to be able to publish something knowing that most major bugs had been eliminated (looking foolish to just a few reviewers rather than 100,000,000)?
Bring it on I say
Warmest Regards
Andy
Start with invited reviewers, but allow the paper and the review discussion to be readable by all. Allow everyone to comment, but hold comments other than those by the reviewers until approval. Allow a moderator or a subset of the invited reviewers to promote a comment to visible, or to promote a comment author to be an additional reviewer.
Also, require that all data and software be available on the web before review starts, and explicitly ask the reviewers to check that the posted materials are adequate.
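The hold-until-promoted workflow just described can be sketched as a small data structure. Everything here — the class, method names, and status labels — is hypothetical, purely to illustrate the two promotion paths (comment to visible, author to reviewer):

```python
HELD, VISIBLE = "held", "visible"

class ReviewThread:
    """Open review: invited reviewers post freely; all other comments are
    held until a moderator or reviewer promotes them."""

    def __init__(self, invited_reviewers):
        self.reviewers = set(invited_reviewers)
        self.comments = []  # each entry: {"author", "text", "status"}

    def post(self, author, text):
        # Invited reviewers' comments appear immediately; others are held.
        status = VISIBLE if author in self.reviewers else HELD
        self.comments.append({"author": author, "text": text, "status": status})
        return status

    def promote_comment(self, index):
        # A moderator (or subset of reviewers) makes a held comment visible.
        self.comments[index]["status"] = VISIBLE

    def promote_author(self, author):
        # A strong commenter is elevated to reviewer for the rest of the review.
        self.reviewers.add(author)

    def visible(self):
        return [c for c in self.comments if c["status"] == VISIBLE]
```

Under this sketch the full thread stays readable by moderators, outside readers see only the visible subset, and a promoted author’s later comments bypass the holding queue automatically.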
I’ve been a reviewer for professional journals, and I have also had to endure review–sometimes enormously useful, sometimes irritatingly self serving. So I think I can speak on a few topics. I know many on this thread would hate to emulate the current system, but some aspects of it are unavoidable.
Journals usually have a chief editor and an editorial board. The editorial board are drawn from the ranks of experts over the range of topics the journal covers. When a paper arrives it is turned over to the appropriate editorial board member who then solicits help from people he/she knows to be area experts that are current with the topics and methods of the paper. The reviewers in turn should voluntarily decline if they have a conflict of interest, cannot do a timely job, or feel they lack the expertise to do a good job; and they would then suggest an alternate reviewer.
Some journals allow an author to suggest reviewers, and this is one practice I’d avoid.
Anthony, if you wish to pursue this further you probably can put together an editorial board from among the people who regularly visit this site, and from those who are willing to reveal their real names and credentials. People do not need advanced degrees, but they do have to be credible in some way.
Someone suggested a print version, but print is hugely expensive. The journals we think of – Science, Nature, JGR, GRL, etc. – have stiff page charges (often waived, but often paid by research grants) that you’d be unlikely to collect. Some journals have advertising, and charge very large subscriptions to libraries and institutions. Print is just a headache.