Measuring Censorship in Science Is Challenging. Stopping It Is Harder Still

By Musa al-Gharbi and Nicole Barbaro

December 14, 2023

In a new paper for the Proceedings of the National Academy of Sciences, we, alongside colleagues from a diverse range of fields, investigate the prevalence and extent of censorship and self-censorship in science.

Measuring censorship in science is difficult. It’s fundamentally about capturing studies that were never published, statements that were never made, possibilities that went unexplored and debates that never ended up happening. However, social scientists have come up with some ways to quantify the extent of censorship in science and research.

For instance, statistical tests can evaluate “publication bias” – whether or not papers with findings tilting a specific way were systematically excluded from publication. Sometimes editors or reviewers may reject findings that don’t cut the preferred direction with the preferred magnitude. Other times, scholars “file drawer” their own papers that don’t deliver statistically significant results pointing in the “correct” direction because they assume (often rightly) that their study would be unable to find a home in a respectable journal or because the publication of these findings would come at a high reputational cost. Either way, the scientific literature ends up being distorted because evidence that cuts in the “wrong” direction is systematically suppressed.
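
To make the publication-bias idea concrete, here is a minimal sketch of one widely used diagnostic, Egger's regression test for funnel-plot asymmetry. The effect sizes and standard errors below are hypothetical placeholders, not data from any real literature, and an actual meta-analysis would pair this with other checks (trim-and-fill, p-curve analysis, and so on).

```python
# A minimal sketch of Egger's regression test for funnel-plot asymmetry,
# a common publication-bias diagnostic. All numbers are hypothetical.
import numpy as np
from scipy import stats

# One effect estimate and its standard error per published study (invented).
effects = np.array([0.42, 0.35, 0.51, 0.28, 0.60, 0.44, 0.33, 0.55])
std_errors = np.array([0.10, 0.12, 0.18, 0.09, 0.25, 0.15, 0.11, 0.22])

# Egger's test regresses the standardized effect (effect / SE) on precision
# (1 / SE). An intercept far from zero suggests that small, imprecise studies
# report systematically larger effects -- a classic publication-bias signature.
precision = 1.0 / std_errors
standardized = effects / std_errors
fit = stats.linregress(precision, standardized)

# Test whether the intercept differs from zero (requires SciPy >= 1.7 for
# the intercept_stderr attribute).
n = len(effects)
t_stat = fit.intercept / fit.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
print(f"Egger intercept = {fit.intercept:.3f}, p = {p_value:.3f}")
```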

Audit studies can provide further insight. Scholars submit identical papers but change things that should not matter (like the author’s name or institutional affiliation) or reverse the direction of the findings (leaving all else the same) to test for systematic variance in whether the papers are accepted or rejected, and in what kinds of comments the reviewers offer, based on who the author is or what they find. Other studies collect data on all papers submitted to particular journals in specific fields to test for patterns in whose work gets accepted or rejected and why. This can uncover whether editors or reviewers are applying standards inconsistently in ways that shut out particular perspectives.
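
As an illustration, here is a minimal sketch of how accept/reject outcomes from such an audit might be tested for systematic variance across conditions. The counts are invented for illustration; a real audit study would also model reviewer comments, author characteristics, and journal-level covariates rather than rely on a single contingency test.

```python
# A minimal sketch: do acceptance rates differ between two versions of an
# otherwise identical paper? The counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: manipulated direction of the (otherwise identical) paper's findings.
# Columns: [accepted, rejected] counts.
outcomes = [
    [18, 32],  # findings in the "preferred" direction
    [7, 43],   # same paper, findings reversed
]

chi2, p, dof, expected = chi2_contingency(outcomes)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
# A small p-value would indicate that acceptance depends on the direction of
# the findings, even though scientific quality was held constant by design.
```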

Additionally, databases from organizations like the Foundation for Individual Rights and Expression or PEN America track attempts to silence or punish scholars, alongside state policies or institutional rules that undermine academic freedom. These data can be analyzed to understand the prevalence of censorious behaviors, who partakes in them, who is targeted, how these behaviors vary across contexts, and what the trendlines look like over time.
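
A rough sketch of how such an incident database might be summarized is below. The records and column names are hypothetical placeholders standing in for rows exported from a tracker like FIRE's or PEN America's (whose real schemas differ), but the basic moves are the same: count incidents over time and break them out by who applies the pressure.

```python
# A rough sketch of summarizing an incident database. All records are
# hypothetical placeholders, not data from FIRE or PEN America.
import pandas as pd

incidents = pd.DataFrame({
    "date": pd.to_datetime(["2021-03-01", "2021-09-15", "2022-02-07",
                            "2022-06-30", "2023-01-20", "2023-05-11"]),
    "source_of_pressure": ["students", "administration", "students",
                           "off-campus group", "scholars", "administration"],
    "outcome": ["no action", "investigation", "termination",
                "no action", "retraction demand", "suspension"],
})

# Prevalence over time: incidents per year.
per_year = incidents.groupby(incidents["date"].dt.year).size()

# Who initiates the pressure, by year, to see how behavior varies across contexts.
by_source = (incidents
             .groupby([incidents["date"].dt.year, "source_of_pressure"])
             .size()
             .unstack(fill_value=0))

print(per_year)
print(by_source)
```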

Supplementing these behavioral measures, many polls and surveys ask academic stakeholders how they understand academic freedom, their experiences with being censored or observing censorship, the extent to which they self-censor (and about what), or their appetite for censoring others. These self-reports can provide additional context to the trends observed by other means – including and especially with respect to the question of why people engage in censorious behaviors.
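
As a simple illustration of how such self-reports are typically summarized, the share of respondents who say they self-censor can be reported with an interval estimate rather than a bare percentage. The counts below are hypothetical placeholders, not results from any actual survey.

```python
# A minimal sketch: estimate a self-censorship rate from survey counts with a
# Wilson confidence interval. The numbers are hypothetical.
from statsmodels.stats.proportion import proportion_confint

n_respondents = 1200   # hypothetical sample size
n_self_censor = 408    # hypothetical count answering "yes, I self-censor"

rate = n_self_censor / n_respondents
low, high = proportion_confint(n_self_censor, n_respondents, method="wilson")
print(f"self-reported self-censorship: {rate:.1%} (95% CI {low:.1%} to {high:.1%})")
```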

One thing that muddies the waters, however, is that many scholars understand and declare themselves as victims of censorship when they have not, in fact, been censored.

For instance, rejection from a journal for legitimate reasons, such as poor scientific quality, is not censorship – although there could be censorship at play if the standards reviewers and editors hold papers to vary systematically depending on what authors find and which narratives the paper helps advance.

Likewise, it’s not censorship if your work, upon publication, is widely trashed or ignored. No one is entitled to a positive reception.

Granted, peer responses to a paper may be unfair or a product of unfortunate biases. A hostile response to particular findings may dissuade other scholars from publishing similar results. And the reception of published work can have career implications for scholars: well-received works can be career-enhancing, while poorly received works have the opposite effect. Nonetheless, there is no censorship at play unless one’s scholarship is prevented from publication, or there are campaigns post-publication to punish the author for their study (through formal or informal channels) or to have the work retracted or suppressed.

Work ignored upon publication has not been censored either. The overwhelming majority of published research receives few reads, even fewer citations (especially if we exclude self-citations), and makes no meaningful impact on the world. This is the outcome people should generally expect for their scholarship, for better or for worse. If someone experiences the modal result for their published work (it gets ignored), this should not be assumed to be a product of unjust bias. And even where there is “dissemination bias” at play (systematic variance in whether papers are read, shared, cited or receive media coverage based on whether they advance or undermine a particular narrative), this is an importantly different problem from censorship.

Likewise, it’s not censorship if scholars engage others in mocking, disrespectful or uncharitable ways and are generally greeted with hostility in turn. There are many “crybullies” in the culture war space who characterize reasonable pushback to their own aggressive behaviors as political persecution.

Nor is it censorship if scholars advocate for a particular position while violating academic rules and norms, and these violations result in censure. Such punishments could approach censorship if standards are enforced inconsistently. It would likewise be censorious to dig up dirt on the author of a publication one dislikes in order to have them punished for ostensibly unrelated offenses, or to launch spurious investigations to make their lives miserable.

It is also necessary to distinguish between self-censorship that arises from real and highly costly threats and self-censorship driven by cowardice or inaccurate information. Often there is plenty of room for people to dissent from prevailing views without significant adverse consequences, but scholars refuse to speak out regardless because they misperceive the magnitude or likelihood of sanction, or because they are unwilling to incur even mild risks to speak their minds (although academics often compare themselves to the likes of Galileo, higher ed may in fact have unusually high concentrations of cowards, conformists and careerists). These aren’t instances of censorship where other people are the problem. The problem in these cases is largely in the mind of the self-censor.

By carefully working through the best available data on censorship in science, sifting genuine cases of suppression from culture war chaff, some general patterns emerge.

One of the most striking patterns is how often censorship is driven by scientists themselves.

Typically, when people think or talk about censorship we imagine external authorities (like governments or corporations), or perhaps campus administrators or overzealous students. We often understand censors to be driven by ignorance, ideological authoritarianism, or a desire to suppress findings that are inconvenient for someone’s political project or bottom line.

In fact, censorship and self-censorship seem to be most typically driven by prosocial motives. Sometimes scholars self-censor or suppress findings because they worry that claims will be easily misunderstood or misused. Sometimes they self-censor and instruct their advisees to do the same out of a desire to avoid creating difficulties for their colleagues and students. Sometimes findings seem dangerous or unflattering to populations that are already stigmatized, vulnerable or otherwise disadvantaged, and scientists suppress findings out of a desire to avoid making their situation worse (although, in practice, censorship often ends up having the most dramatic and pernicious effects on these very populations).  

Critically, it isn’t just censorship that works this way. Many other academic problems tend to be driven by prosocial motives as well.

As psychologist Stuart Ritchie demonstrates in Science Fictions (Metropolitan Books, 2020), academics who commit fraud often seem genuinely convinced that the narratives advanced by their papers are, in fact, true. Fraud is often motivated, in part, by a desire to amplify what scientists believe to be the truth when their experiments fail to provide the expected confirmatory data. In other cases, scholars are convinced that a new treatment or intervention can help people, but they feel like they need eye-popping results to draw attention or secure funding for it – leading them to either massage the data or overhype their findings.  

And as Lawrence Lessig shows in America, Compromised (University of Chicago Press, 2018), it is often scholars who are sincerely committed to honesty and rigor who end up being corrupted – and it is precisely their high sense of integrity that often blinds them to the ways they end up compromising their work.

This is precisely what makes many problems with the state of science difficult to address. They often aren’t caused by bad scientists driven by evil motives but by researchers trying to do the right thing in ways that ultimately undermine the scientific enterprise.

To reduce censorship and self-censorship, it’s not enough to create robust protections for academic freedom. We must also convince scientists to use those freedoms to follow the truth wherever it leads and to tell the truth even when doing so seems to conflict with other priorities.

This article was originally published by RealClearScience and made available via RealClearWire.

15 Comments
Petit-Barde
December 30, 2023 12:49 am

Many scientific and technological fields related to climate, health, energy, environment, agriculture, transportation, …, (all crucial for humankind) have been hijacked by a totalitarian agenda supported by institutions infested with unelected phony Malthusians, nostalgic Marxists and crony capitalists such as the UN, WHO, NASA, EPA, FDA, CDC, WMO, WEF, the Club of Rome, most European institutions, …, and this agenda is actually implemented in most countries by corrupt governments.

Reply to  Petit-Barde
December 30, 2023 1:54 am

s/Many/Most/g

Reply to  Petit-Barde
December 30, 2023 8:34 am

I strongly agree, and censorship of honest scholarship is seen as a feature rather than a flaw by the protagonists of the totalitarian agenda. We should see the censorship as more of a symptom than the central problem. The real issue is how the wrong people gained control of the institutions (political, bureaucratic, financial, academic, etc.) that we rely on to keep society moving forward. Our survival as a successful society relies on our finding the solutions and acting on them. We’ve ignored the rot for far too long and have accepted the propaganda as soothing reassurance that everything is fine when it is not.

strativarius
December 30, 2023 1:48 am

“Measuring censorship in science is difficult.”

Try starting with Covid1984 – examples abound, and now…

“Arise, Sir Lockdown…

An outspoken Covid scientist who pushed for longer and tougher lockdowns is to be knighted in the New Year’s Honours.”
https://www.dailymail.co.uk/news/article-12910405/Lockdown-Outspoken-Covid-scientist-knighted-New-Years-Honours-list-tycoon-jailed-Guinness-scam.html

Leo Smith
Reply to  strativarius
December 30, 2023 1:57 am

Every single Knight of the Realm I ever met was a criminal, or a liar, or both.

Same goes for most of the Nobel ‘Peace Prize’ awards.

strativarius
Reply to  Leo Smith
December 30, 2023 2:22 am

I wouldn’t give them the time of day

Reply to  Leo Smith
December 30, 2023 8:38 am

Absolutely true. I have gradually learned over time that many awards in our current socio-political environment are signs of compliance with the authoritarian dogma, and not indicative of individualism and integrity. I have worked beside researchers in the medical field who were celebrated with national awards when the academic conduct I witnessed firsthand was inept and dishonest, and their published output without merit.

December 30, 2023 3:55 am

“… debates that never ended up happening…”

Like the debate over the climate.

pillageidiot
December 30, 2023 6:27 am

Trofim Lysenko and the Soviet Academy of Sciences certainly knew how to do censorship right!

Lysenko was appointed the director of the Institute of Genetics so he could quash all efforts to show that inherited genetics actually controlled plant growth.

If you did NOT agree with his theories, your papers were not published. If you spoke out about that censorship, you lost your job. If you continued to speak out, you were sent to the brutal gulag. If you did not meekly go to the camps, they just shot you.

That is how you control the narrative on a scientific topic!

/sarc mode off

(I am worried that our scientific “betters” are starting to admire the efficiency of the Soviets’ methods of scientific debate.)

insufficientlysensitive
December 30, 2023 7:58 am

“We must also convince scientists to use those freedoms to follow the truth wherever it leads and to tell the truth even when doing so seems to conflict with other priorities.”

Oh, snap. Who is this, to lecture scientists on what the scientific method is? We need to convince the media gossipers to learn it themselves.

John Oliver
December 30, 2023 9:53 am

Censorship – we used to have a robust culture of free speech and anti-censorship, and of course legal and constitutional precedent and foundational principles (based on natural rights), in the United States and most Western nations.

What the hell happened? The self-censorship is especially insidious. I have found myself doing this: you think, “Do I really want to risk it?” – especially if the blowback could cost you and your family a job or career, or put them in the crosshairs of some really disgusting people and groups out there. And sometimes one just thinks practically, “Maybe we just need to survive to fight another day.”

I just didn’t think this could happen here, in my lifetime. There is a rotten side to humanity.

Janice Moore
December 30, 2023 10:09 am

1. Cui bono

a desire to suppress findings that are inconvenient for someone’s bottom line. Richard Lindzen has pointed this out repeatedly. Before scientists were rewarded for promoting the “climate change” scam, there was relatively little funding for climate science. Now, “climate change” is a way to make money.

2. CORRECTION:

researchers trying to do the [what they believe to be the] right thing in ways that ultimately undermine the scientific enterprise ARE “bad scientists.”

December 30, 2023 10:10 am

From the article: “As psychologist Stuart Ritchie demonstrates in Science Fictions (Metropolitan Books, 2020), academics who commit fraud often seem genuinely convinced that the narratives advanced by their papers are, in fact, true.”

I don’t think this would apply to the temperature data mannipulator, Michael Mann, and his Climategate cronies. They knew their temperature data mannipulations were not true, but they presented them as representing reality. Pure fraud. And not in service of the “greater good”.

John Oliver
December 30, 2023 10:55 am

Yes, the funding that follows this stuff is a huge problem, and it spills into not only research but obviously the massive techno-climate industrial complex. I’ve tried to point out to some of my debating rivals that the cult of “magical exotic technological (unneeded) solutions” can suck up massive amounts of investment capital, which is not an unlimited resource despite what many economic illiterates think.

These false assumptions then get telegraphed through the media and can bankrupt individual and institutional investors alike, as well as taxpayers, through a massive misallocation of resources. It is a catastrophic chain reaction – and we are, ha ha, at the “tipping point.”

December 31, 2023 11:07 am

Any field of science that includes a fraud like Piltdown Mann is not science. Or it has no self-respect.

Falsely manipulating and then hiding data is the antithesis of science.

And yet he remains in place.