Cut-throat academia leads to ‘natural selection of bad science’, claims study
Ralph Westfal submits this story via the Guardian
Scientists are incentivised to publish surprising results frequently in major journals, despite the risk that such findings are likely to be wrong, research suggests.
Getting stuff right is normally regarded as science’s central aim. But a new analysis has raised the existential spectre that universities, laboratory chiefs and academic journals are contributing to the “natural selection of bad science”.
To thrive in the cut-throat world of academia, scientists are incentivised to publish surprising findings frequently, the study suggests – despite the risk that such findings are “most likely to be wrong”.
Paul Smaldino, a cognitive scientist who led the work at the University of California, Merced, said: “As long as the incentives are in place that reward publishing novel, surprising results, often and in high-visibility journals above other, more nuanced aspects of science, shoddy practices that maximise one’s ability to do so will run rampant.”
The paper comes as psychologists and biomedical scientists are grappling with an apparent replication crisis, in which many high-profile results have been shown to be unreliable. Observations that striking a power pose will make you feel bolder, that smiling makes you feel happy, or that placing a pair of “big brother” eyes on the wall will protect against theft have all failed to stand up to replication.
Sociology, economics, climate science and ecology are other areas likely to be vulnerable to the propagation of bad practice, according to Smaldino.
Smaldino cites an experiment by the American psychologist Daryl Bem, who purported to show that undergraduates could predict the future and published the result in a prestigious journal.
“What he found was the equivalent of flipping a bunch of pennies, nickels, and quarters, asking students to guess heads or tails each time, and then reporting that psychic abilities exist for pennies, but not nickels and quarters, because the students were right 53% of the time for the pennies, rather than the expected 50%. It’s insane,” said Smaldino. “Bem used exactly the same standards of evidence that all social psychologists were using to evaluate their findings. And if those standards allowed this ridiculous a hypothesis to make the cut, imagine what else was getting through.”
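Smaldino’s coin analogy is easy to simulate. The sketch below (a minimal illustration, not Bem’s actual analysis; the coin names, trial count, and seed are made up) flips three “coin types” that are pure chance, then runs a standard two-sided binomial test on each one at the conventional p < 0.05 threshold. Test enough subgroups this way and roughly 1 in 20 will look “psychic” by luck alone, which is the loophole Smaldino is describing:

```python
import math
import random

def binomial_p_value(successes, trials, p=0.5):
    """Two-sided exact binomial test: probability of a result
    at least as far from the chance mean as the one observed."""
    mean = trials * p
    dev = abs(successes - mean)
    total = 0.0
    for k in range(trials + 1):
        if abs(k - mean) >= dev:
            total += math.comb(trials, k) * p**k * (1 - p)**(trials - k)
    return total

if __name__ == "__main__":
    random.seed(1)  # arbitrary seed, for reproducibility only
    trials = 1000

    # Every "coin" is pure chance: students guess correctly 50% of the
    # time on average, whatever denomination is used.
    for coin in ["pennies", "nickels", "quarters"]:
        hits = sum(random.random() < 0.5 for _ in range(trials))
        pval = binomial_p_value(hits, trials)
        verdict = "psychic!" if pval < 0.05 else "chance"
        print(f"{coin}: {hits/trials:.1%} hits, p = {pval:.3f} ({verdict})")
```

Under this two-sided test, 53% correct out of 1,000 guesses is not quite significant at the 0.05 level; it only clears the bar with more trials, a one-sided test, or, as in the scenario Smaldino mocks, by testing many subgroups and reporting the one that happened to come out ahead.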
Yes, imagine. Full story at the Guardian
In a paper published earlier this year, Smaldino sums up the problem:
Scientists often learn more from studies that fail. But failed studies can mean career death. So instead, they’re incentivized to generate positive results they can publish. And the phrase “publish or perish” hangs over nearly every decision. It’s a nagging whisper, like a Jedi’s path to the dark side.
“Over time the most successful people will be those who can best exploit the system,” Paul Smaldino, a cognitive science professor at the University of California, Merced, says. To Smaldino, the selection pressures in science have favored less-than-ideal research: “As long as things like publication quantity, and publishing flashy results in fancy journals are incentivized, and people who can do that are rewarded … they’ll be successful, and pass on their successful methods to others.”
Many scientists have had enough. They want to break this cycle of perverse incentives and rewards.