Peer review falls for recycled manuscripts

Margaret writes in tips and notes:

More about the failure of peer review, or more precisely its inconsistency in producing reliable assessments of the value of the submitted article:

http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=6577844

Abstract

A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.

May 29, 2013 12:35 am

No way!

Jonathan Abbott
May 29, 2013 12:48 am

Well it is psychology we’re talking about.

RokShox
May 29, 2013 1:03 am

The only thing reviewed was the name of the submitting institution. No chance for backscratching -> reject.

kadaka (KD Knoebel)
May 29, 2013 1:06 am

So with the “correct” names, institutions and individuals, papers will pass peer review and be published.
Without those “correct” names, using fake ones that aren’t recognized, the same papers are now tragically flawed, rejected for publication.
What does this tell us about the peer review system?
It tells us editors are terrible at detecting plagiarism! They didn’t recognize papers their own journals had published. Don’t these editors read their own journals?

Margaret Hardman
May 29, 2013 1:14 am

The paper of which this is the abstract was published in Behavioral and Brain Sciences in May 1982.

David Schofield
May 29, 2013 1:15 am

1982 paper!?

Tim
May 29, 2013 1:21 am

Table of Contents – 1982 – Volume 5, Issue 02 (Special Symposium Issue)
I suspect the findings of this paper might be considered out of date, given that the research was carried out before I was born, and the peer review process was probably carried out using pen, paper and envelopes. Perhaps a case of falling for recycled news?

pesadia
May 29, 2013 1:28 am

I know I shouldn’t be, but I am shocked by this article.
Is there any more information available?
What is the opposite of “you have made my day”?

rk
May 29, 2013 1:32 am

Did anyone at WUWT note the kerfuffle over the Rogoff and Reinhart paper on sovereign debt? They made their data public, and sure enough some energetic grad student found a mistake in their XLS sheet. How embarrassing… esp. for a paper that was massively covered in the press.
Interestingly, the main finding that GDP growth was under 2 percent with debt/GDP over 90 percent was changed to 2.5%. They used medians in this analysis… which tells me that their data is pretty noisy and has a large variance. (As I understand it, though, others have found the same thing: about 2 percent with heavy debt loads.)
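The median-versus-mean point above can be illustrated with a toy sketch (the numbers below are invented for illustration, not from the Reinhart–Rogoff dataset): with noisy, skewed samples, a single extreme value drags the mean but barely moves the median, which is one reason analysts reach for medians when variance is large.

```python
# Toy illustration: mean vs. median on noisy, outlier-prone data.
from statistics import mean, median

# Hypothetical GDP-growth observations (percent) for high-debt
# country-years, with one crisis-year outlier.
growth = [1.8, 2.1, 1.5, 2.4, 1.9, 2.2, -7.5]

print(round(mean(growth), 2))    # pulled down by the outlier -> 0.63
print(round(median(growth), 2))  # robust to it -> 1.9
```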

Andrewmharding
Editor
May 29, 2013 1:49 am

A better description of papers, recycled or not, supporting the AGW hypothesis is “cronyism”. Peer review implies a degree of respectability which it most certainly does not have, since the hypothesis is clearly wrong!

Admin
May 29, 2013 1:51 am

If there is value to this 30 year old article, it would rest with someone who repeated the experiment today to see if there is any noticeable difference.
If we only have this paper as a data point, you can hand-wave that things are getting better or getting worse, or that psychology, perhaps one of the most subjective of the published sciences, is much more susceptible to this kind of favoritism, which we know exists elsewhere; but to what degree?

tallbloke
May 29, 2013 2:11 am

Tim says:
May 29, 2013 at 1:21 am
I suspect the findings of this paper might be considered out of date, given that the research was carried out before I was born, and the peer review process was probably carried out using pen, paper and envelopes. Perhaps a case of falling for recycled news?

The ability to miss the point is strong in some. Tell me, Tim: whilst plagiarism is far more easily spotted these days, how much do you think the institutional bias this paper highlights has changed? If you think it has changed, give us your reasons.

David L.
May 29, 2013 2:20 am

I wonder how well Mann’s hockey stick paper would fare in resubmission.

DirkH
May 29, 2013 2:21 am

rk says:
May 29, 2013 at 1:32 am
“Did anyone at WUWT note the kerfuffle over the rogoff and reinhart paper on sovereign debt?”
Yes of course. The Krugmanites/Modern Monetarists need every excuse they can get to continue justifying Weimarian policies. It doesn’t matter, as the end result is known.

Dodgy Geezer
May 29, 2013 2:29 am

You call this a paper?
I have just made a modest contribution to the sum total of human knowledge by publishing a paper in the comments column of the Guardian. If anyone is interested I reproduce it here – detailed data available to bona fide researchers on request (but not if you’re looking to find something wrong with it!)
Metropolitan University of Nether Wallop
Behavioural change amongst Global Warming activists
D Geezer – Faculty of Advanced Chemistry and Roadside Catering
Global Warming seems to be having a hard time these days. The heady days of new papers proving without doubt that Global Warming is an immediate danger seem to have become rather thin on the ground, and in their place a new topic of discussion has arisen. This can best be described as a ‘Call to the Faithful’ – an exhortation not to lose heart as the science collapses around their ears. Running questionable surveys attempting to show that the hypothesis is still valid is one example of this type of behaviour. But can this opinion be backed up with evidence?
I have just conducted a short Cook-type survey to examine this phenomenon. The methodology was as follows:
1 – find an independent presenter of Global Warming news stories. I picked ‘Climate Debate Daily’ – a site which provides one pro and one anti item each day.
2 – examine and categorise the ‘pro-global warming’ stories into three headings, using the abstracts provided. I picked the following categories:
a) – a story providing New Data on global warming (typically reports of technical papers)
b) – a story emphasising Solidity of Belief in global warming (typically reports of pro-global warming activity)
c) – Other Stories (often comment on political or industrial activity)
Note that the New Data do not have to support Global Warming theory – they just have to be stories providing new information. In practice, most of the stories under this heading actually indicated that the threat was smaller than had been assumed.
The results for the most recent 30 are as follows:

New Data – 6
Encouraging Belief – 16
Other – 8

Following the Cook methodology, I dropped the ‘Other’ figure. The percentages then become (rounded to my error bars):

New Data about Global Warming – 25%
Encouraging the faithful – 75%

Thus it is shown that the Global Warming industry spends three times as much effort on preventing people leaving the faith as it does on showing that the faith is correct. Which supports the thesis at the beginning of this piece…
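The drop-the-‘Other’ arithmetic above can be sketched in a few lines; the raw shares come out at roughly 27% and 73%, which the author rounds to 25/75 (“rounded to my error bars”):

```python
# Minimal sketch of the Cook-style percentage calculation described above.
counts = {"New Data": 6, "Encouraging Belief": 16, "Other": 8}

kept = {k: v for k, v in counts.items() if k != "Other"}  # drop 'Other'
total = sum(kept.values())                                 # 22 stories kept
shares = {k: round(100 * v / total) for k, v in kept.items()}

print(shares)  # raw shares before the author's rounding to 25/75
```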

FerdinandAkin
May 29, 2013 2:30 am

Let me guess: the one paper that made it through a second peer review was originally written by psychologist Diederik Stapel of Tilburg University.

May 29, 2013 2:37 am

Old news. Paradigm paralysis – if you’re within the reigning paradigm, you’re safe. If you dissent, you’re in for some trouble.
http://en.wikipedia.org/wiki/Paradigm#Paradigm_paralysis

LevelGaze
May 29, 2013 2:46 am

The only astonishing thing about this is that a 30-year-old article is still paywalled (and it might not have been in the first place).

May 29, 2013 2:47 am

When the peer-pal reviewers see the required funding stream buzz words….
the gatekeepers cross the subliminal honesty barrier….
they then let the mono tribe droids pass….hence….
“climate change” invokes kum-baya vibes and peer-pal approval.

Ian H
May 29, 2013 3:06 am

I can trump your 30-year-old article on recycled papers with a 1-year-old article on a paper consisting entirely of computer-generated nonsense being accepted:
http://thatsmathematics.com/blog/archives/102
It is worth reading for the extremely amusing referee’s report.
However, this isn’t as surprising as you might think: the paper seems to have gotten through only at the extreme garbage end of the academic publishing business. These people are really little more than scam artists looking to make money. The idea is to make up a plausible-sounding journal name, set up a website, and send spam to what looks like every academic on the planet touting for submissions. Anyone silly enough to submit an article to a journal they found out about purely by spam is probably also silly enough to pay the $500 “processing charge” requested once the article is (inevitably) accepted.

Fred
May 29, 2013 3:19 am

You are aware that this paper is over 30 years old, right?

Shevva
May 29, 2013 3:19 am

It’d take some sort of expert to understand this kind of mindset.
I hear there’s a bloke in Oz who’s good at getting to the bottom of things.

StephenP
May 29, 2013 3:33 am

Shouldn’t all papers submitted remain anonymous until they have passed peer review?
There seems to be a large amount of mutual ‘back-scratching’ going on, as is reputed to be the case in any form of competitive ‘sport’.

Aynsley Kellow
May 29, 2013 3:47 am

StephenP: The important thing here is that these journals did not employ blind, let alone double-blind, review. One of the things that first concerned me about the quality of climate science was learning that several of the journals publishing papers did not blind the identity of authors to the reviewers. Some even (as do many journals) asked authors to suggest reviewers. Easy for the editors, but very undermining of the QA value of peer review, and much easier for paradigms to be defended by circling the intellectual wagons. The problems have been exacerbated since the publication of this paper by two aspects of ‘globalisation’: cheap air travel since the introduction of the jumbo jet, and e-mail, both of which mean that the couple of dozen experts in a specialized area of knowledge are likely to be personally known to each other.

May 29, 2013 3:51 am

“Publish or perish” has a whole new meaning when the publishers play a role as “gatekeepers” to the accepted norms, and the accepted hierarchy, and the status quo.
The fact this paper is dated doesn’t mean the problem doesn’t still exist.
Rejection slips are terribly crushing to young, hopeful writers, whether they work in the arts or the sciences. Those who stick to their vision of truth need to learn to be tough, and how to live on a low income while working other jobs, while those who dance to the tunes of the gatekeepers compromise truth, become dupes, and are often looked upon in the manner people look upon Bill McKibben.

May 29, 2013 4:39 am

You shouldn’t refer to something from 1982…

Txomin
May 29, 2013 4:40 am

Yep, this paper is a classic. Things are far, far, far worse now.

Edwin C
May 29, 2013 4:54 am

Peer review is not confined to academic papers; it is a standard method of quality control used in software development. The results are very similar to those found with academic papers: it depends how it is done. If the organisation requires that someone else ‘reviews’ the work, but does not specify a methodology, it usually results in ‘pal review’ on a mutual-assistance basis. You are not going to make life difficult for your co-workers in case they respond in kind. In this case it is barely better than useless as a method of finding errors. In an organisation where you have no control over who the reviewer is, and there is a defined methodology for the review process, the results are far better. In all cases problems will still exist with what has been developed. Fortunately software development normally has adequate testing processes, something our friends in climate science are unable to achieve to any real extent.

JJB MKI
May 29, 2013 5:13 am

Hardman
May 29, 2013 at 1:14 am
“The paper of which this is the abstract was published in Behavioral and Brain Sciences in May 1982.”
Yes, and haven’t things improved since that time?
“will keep them out somehow — even if we have to redefine what the peer-review literature is!”
– Phil Jones email to Michael Mann.

starzmom
May 29, 2013 5:27 am

It’s worse with law review articles, which are student edited and only vetted to make sure that a cited source says what the author says it says. There is NO evaluation of the accuracy of the material presented. I’m sure it helps if the author is highly placed–a chaired position at a prestigious law school, for example. They seem to be the worst for putting stuff out without evaluation of the accuracy of the facts, knowing of course that no one else will vet it. I tried to lead a charge at my law school, with the journal I worked on, to improve the vetting for factual accuracy. No one seemed to care.

Doug Huffman
May 29, 2013 6:07 am

pesadia says: May 29, 2013 at 1:28 am … What is the opposite of, “you have made my day?”
Gimme back my minutes. So much to read, so little time.
In re sovereign currency, see Bitcoin, as I struggle to implement the various clients/protocols on my new-to-me linux.
Believe nothing that one reads or hears without verifying it oneself, unless it fits one’s preexisting worldview.

Doug Huffman
May 29, 2013 6:12 am

About nonsense accepted: I pray all here are familiar with the Sokal Affair, for me the opening shots in the Science Wars. Physicist Alan Sokal wrote the essay ‘Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity’, which was accepted by the journal Social Text before he revealed his hoax. A large literature has developed from the affair.

chris y
May 29, 2013 6:25 am

It is interesting that this article is from 1982, when volunteer reviewers had much more time on their hands to carefully review submitted manuscripts. Back in the early 1990’s, I had first-hand exposure to the pal review involved in pushing a manuscript onto the tenure-invoking, grant-raising pages of Science.
My suspicion is that the situation today is far worse than I thought. For example:
“I mean peer review is an utterly corrupt, ignorant stupid, mad system that we’ve created it’s just that we haven’t come up with anything better.
But let’s understand what peer review is:
Peer review is not about checking the validity of data
Peer review is not about reproducibility of data
Peer review is a check on acceptability, acceptability in the scientific community.
And that’s why I think you will still need editors, because our job is to be awkward. To say, even though you’ve got three or four peer reviewers who don’t like this, and don’t want it published. To hell with them, we’re still going to publish it. Because it still says something interesting. It looks interesting, even though it’s not accepted by this tiny group of people who we call peer reviewers.”
Richard Horton, editor in chief of Lancet, 02/04/13
Yes, that is 2013, as in this year, folks.
Yes, that is the editor in chief of Lancet, a renowned medical journal, folks.

May 29, 2013 7:25 am

Academic peer review is absolutely hit and miss. You get the whole spectrum, from scrupulously fair, detailed, and helpful, through superficial and negligent, to dismissive one-liners and competitively motivated hatchet jobs. Just as one might expect from other experience with humanity at large.

wws
May 29, 2013 8:24 am

So it’s from 1982. The Principia was published in 1687; that doesn’t mean it’s no longer valid. Apparently some think that anything done before the invention of the iPhone doesn’t count.

climatologist
May 29, 2013 8:39 am

Alas, I once got rejected because of a reviewer’s statement: “He is not a member of the xxx community”. The editor in this case was guilty.

Gary Pearse
May 29, 2013 8:51 am

“eight of the nine were rejected. ”
Well, within error bars, that’s 80% rejected. It looks like this “80%” is arrived at by just random rejection.
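A back-of-envelope check of the “within error bars” remark, assuming a simple binomial model (an assumption of mine, not stated in the study): 8 rejections out of 9 gives an observed rate of about 89% with a standard error of roughly 10 percentage points, so the journals’ stated 80% base rejection rate sits within one standard error.

```python
# Rough binomial check: is 8/9 rejected consistent with an 80% base rate?
from math import sqrt

n, rejected = 9, 8
p_hat = rejected / n                  # observed rejection rate ~0.889
se = sqrt(p_hat * (1 - p_hat) / n)    # binomial standard error ~0.105

# True if the 80% base rate falls inside one standard error of p_hat.
print(abs(p_hat - 0.80) < se)
```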

dp
May 29, 2013 10:07 am

For those distracted by the date of publishing, there is a follow-up that is equally damning and so spot-on as regards the contemporary state of things that it would be considered confirmation bias by Slick Lew:
http://cogprints.org/5179/

May 29, 2013 10:37 am

@Dodgy Geezer –
On a trip to England in 1979 I visited Nether Wallop, and do not recall seeing a university there. Am I correct in assuming there is still no university there?
Seriously, one wonders if some of the institutions spewing the AGW twaddle rather ought to be called the University of Sphincter, rather than by their present names.

May 29, 2013 11:20 am

David L. says:
May 29, 2013 at 2:20 am
I wonder how well Mann’s hockey stick paper would fare in resubmission.
______________________________________________________
In climate science, you’d be accused of plagiarism since everybody in the (climate science) world has seen it.
However, it would be very interesting if the underlying PCA were applied to non-climate data (red noise, perhaps!) and submitted in a field other than climate science. Preferably a field known for strong statistics.
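The red-noise experiment hinted at above can be sketched roughly as follows. This is a hypothetical illustration, not any published analysis: it applies a short-centred PCA (centring each series on only its final segment, as critics of the hockey stick said Mann’s method did) to pure AR(1) ‘red noise’. All names and parameters here are my own illustrative choices.

```python
# Hypothetical sketch: short-centred PCA applied to AR(1) red noise.
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years, calib = 50, 600, 100  # proxies, years, calibration window

# Generate red noise: AR(1) series with persistence phi.
phi = 0.9
noise = rng.standard_normal((n_series, n_years))
red = np.zeros_like(noise)
for t in range(1, n_years):
    red[:, t] = phi * red[:, t - 1] + noise[:, t]

# Short centring: subtract the mean of only the last `calib` years.
centred = red - red[:, -calib:].mean(axis=1, keepdims=True)

# Leading principal component via SVD.
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pc1 = vt[0]

# Short centring weights PC1 toward series whose calibration-period mean
# departs from their long-term mean, i.e. toward 'hockey stick' shapes.
print(pc1.shape)
```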

tty
May 29, 2013 11:46 am

Double-blind review isn’t quite the panacea it is sometimes made out to be. In fields with very many active scientists it might work. In narrower specialties (like mine) it won’t. It would be very easy to guess who the author is, and it is relatively easy to identify the referees. I can’t remember ever writing a paper where I couldn’t identify at least one of the referees, and sometimes all of them.

Margaret Hardman
May 29, 2013 12:08 pm

@dp
Your reference was published in 1982 as well. Anyone got evidence of peer review in the current century beyond some personal communications?

rabbit
May 29, 2013 1:24 pm

One would hope, perhaps desperately, that current web-based search facilities would make the results of such a test today a little more flattering to the peer review process.

rabbit
May 29, 2013 1:33 pm

I do peer review at a rate of three or four papers a year, plus short reviews (gradings really) of conference abstracts.
Much of peer review has nothing to do with the validity of the results, but rather with the organization of the paper, quality of writing, clarity of exposition, conformity to standards, whether the graphs are properly annotated, and so on. Reviewers do not have the resources to check up on the correctness of the results. If the author claims he or she did this and got that result, we pretty much have to take their word for it.
Thus an incorrect or fraudulent paper could easily pass peer review. I’m sure I could pull it off myself.

Kuze
May 29, 2013 1:58 pm

Here’s another paper that’s worth reading for those interested in the one above:
Publication Prejudices: An Experimental Study of Confirmatory Bias in the Peer Review System
http://people.stern.nyu.edu/wstarbuc/Writing/Prejud.htm
Long story short: scientists find the conclusion of a paper more influential than its method

J Martin
May 29, 2013 2:02 pm

I wouldn’t be surprised if, were the same tests conducted today, the results turned out the same.

Margaret Hardman
May 29, 2013 2:07 pm

@Kuze
Your link goes back to 1977. My question earlier was to find something more recent that lets us know the current situation rather than using assertion or anecdote.

John M
May 29, 2013 4:50 pm

Margaret,
Perhaps a better question is for you to find evidence that things have changed.
After all, like legal precedent, unless someone publishes something to the contrary, the research stands.
(If I were the preaching type, I might say “That’s how Science works!”)

Txomin
May 29, 2013 6:44 pm

Yes, editors are largely to blame.

Aynsley Kellow
May 29, 2013 7:45 pm

“And that’s why I think you will still need editors, because our job is to be awkward. To say, even though you’ve got three or four peer reviewers who don’t like this, and don’t want it published. To hell with them, we’re still going to publish it. Because it still says something interesting. It looks interesting, even though it’s not accepted by this tiny group of people who we call peer reviewers.”
Richard Horton, editor in chief of Lancet, 02/04/13
I guess ‘ Because it still says something interesting’ might explain why Horton published rubbish like that on Pusztai’s potatoes, on which a Royal Society review concluded (excuse the Wikipedia):
‘that Pusztai’s experiments were poorly designed, contained uncertainties in the composition of diets, did not have a large enough number of rats, used incorrect statistical methods and lacked consistency within experiments. Pusztai responded by saying they had only reviewed internal Rowett reports, which did not include the design of the experiments or methodology used.’
The Lancet also withdrew the 1998 paper by Andrew Wakefield on MMR and autism only after this:
‘On 28 January 2010, a five-member statutory tribunal of the GMC found three dozen charges proved, including four counts of dishonesty and 12 counts involving the abuse of developmentally challenged children.’
And then there is the estimate of deaths from the Iraq conflict, by opinion poll, published in The Lancet, which found 601,027 deaths (range 426,369 to 793,663), far in excess of the estimates of the Iraq Body Count project, the UN, or the Iraqi Health Ministry.
There is a serious corruption of the peer review process if editors start publishing things they find interesting – especially since those interests are likely to influence the choice of reviewers as well as decisions based upon their reviews.
As Carl Sagan once put it, ‘Where we have strong emotions, we’re liable to fool ourselves.’

Margaret Hardman
May 29, 2013 10:22 pm

I’m not the one saying peer review does not work. Besides, I am asking whether there is anything more up to date, now that we have more modern ways of communicating and publishing.

dp
May 29, 2013 11:42 pm

Margaret Hardman says:
May 29, 2013 at 12:08 pm
@dp
Your reference was published in 1982 as well. Anyone got evidence of peer review in the current century beyond some personal communications?

Did you miss that the last update was 2011? The article is considered timely as of that date. We also know, thanks to Climategate, that the peer review system in climate science is fubar in totality, and we have that from peer reviewers and those who work within the peer review system.

Margaret Hardman
May 30, 2013 2:10 am

@dp
I think the last update refers to the admin on the site (i.e. when they last checked the listing and its details) rather than an update of the paper. I’ve read the paper; it’s not pretty, but it is a review article rather than new research. And it is commenting on the first paper’s findings (I’ve read that too, and the comments that arose from it), so it would be unlikely to get an actual update; but if you have a link or reference to an updated version of the paper then I would welcome the chance to read it.
Once again I would welcome links or sources for new research on peer review rather than assertions and anecdotes. Can anyone provide any?

John M
May 30, 2013 3:14 pm

Poor Ms. Hardman can’t get anyone in this unruly class to do her homework assignment.
Anyone else reminded of this?

Margaret Hardman
May 30, 2013 3:30 pm

John M
No, doesn’t remind me of anything. Perhaps you were unlucky enough to have a teacher like that. Perhaps you were unlucky enough to be the boy in the class wanting to learn surrounded by those who would rather disrupt the learning and come to regret it later. Not quite sure why you chose to search YouTube rather than for papers on peer review from this century but I can assure you there are some things out there and they aren’t hard to find. Haven’t checked them myself but I have found some titles. The “homework” is a bit of an accidental test to see how skeptical those who proclaim that epithet really are. Do you really want to find things out or do you just, as someone else commented, only want to know things that confirm your worldview? Not you personally, John, but the followers of this site in general. I didn’t plan it that way but the request has been largely ignored which may mean nothing but suggests at least that it hasn’t been done.
Since the lid of the box labelled “Personal Insults” has now been lifted, I expect more. That is the pattern, as on eristic sites dotted all over the Net.

John M
May 30, 2013 3:44 pm

Actually Margaret, I found this last night, but I wondered how prissy you wanted to get about your “homework assignment”.
Funny, it took me about 2 minutes of googling, but I guess that was too hard for you, an esteemed “educator”.
http://onlinelibrary.wiley.com/doi/10.1002/asi.22784/full
You can see that the preponderance of literature reviewed points out various types of bias. The review authors try their best at “false balance”, but basically it’s pretty clear that there is a large body of literature reporting on bias in the peer-reviewed literature.
I guess another experiment we’ve just done is confirmed an extreme deficiency among strident CAGWers.
You wouldn’t by any chance have had some trouble finding your way around southwestern Arizona? You seem to lack a sense of Yuma.

Steve Garcia
May 30, 2013 3:56 pm

What an indictment of a system.
What it does imply is that any review at all does nothing but exclude work, work that may have some value.
Better no system than this system.
BTW: This wouldn’t be possible in climate science, where everybody knows everybody.
Steve Garcia

dp
May 30, 2013 9:58 pm

Margaret Hardman says:
May 30, 2013 at 2:10 am
Once again I would welcome links or sources for new research on peer review rather than assertions and anecdotes. Can anyone provide any?

Did you miss the Climategate mail where the gatekeepers swore to do what was needed to prevent papers from being published? In their own words, by their own hand. I’m not doing your homework, either.

Margaret Hardman
May 30, 2013 10:22 pm

Had no trouble finding my way around Google or Arizona. I found the right Page.