How a PNAS Study Confuses Institutional Authority with Truth

Every so often an academic paper appears that claims to analyze a problem while quietly defining the outcome in advance. The recent PNAS paper by Mosleh et al., “Divergent patterns of engagement with partisan and low-quality news across seven social media platforms,” is a textbook example.

Abstract

In recent years, social media has become increasingly fragmented, as platforms evolve and new alternatives emerge. Yet most research studies a single platform—typically X/Twitter, or occasionally Facebook—leaving little known about the broader social media landscape. Here, we shed light on patterns of cross-platform variation in the high-stakes context of news sharing. We examine the relationship between user engagement and news domains’ political orientation and quality across seven platforms: X/Twitter, BlueSky, TruthSocial, Gab, GETTR, Mastodon, and LinkedIn. Using an exhaustive sample, we analyze all (over 10 million) posts containing links to news domains shared on these platforms during January 2024. We find that news shared on platforms with more conservative user bases is significantly lower quality on average. Turning to engagement, we find—contrary to hypotheses of a consistent “right-wing advantage” on social media—that the relationship between political lean and engagement is strongly heterogeneous across platforms. Conservative news posts receive more engagement on platforms where most content is conservative, and vice versa for liberal news posts, consistent with an “echo platform” perspective. In contrast, the relationship between news quality and engagement is strikingly consistent: Across all platforms examined, a given user’s lower-quality news posts received higher average engagement, even though higher-quality news is substantially more prevalent and garners far more total engagement across posts. This pattern holds when accounting for poster-level variation and is observed even in the absence of ranking algorithms, suggesting that user preferences—not algorithmic bias—may underlie the underperformance of higher-quality news.

https://www.pnas.org/doi/10.1073/pnas.2425739122

The authors announce, right up front, that they are studying “news quality,” political lean, and engagement across seven platforms. What they never seriously do is justify what “quality” means. Instead, they import it wholesale from an ideological ecosystem that has already decided which voices are acceptable.

Their own description is telling:

“We measure the quality of the news source linked to in each post using a ‘wisdom of experts’ approach in which ratings from a variety of fact-checkers, journalists, and academics are aggregated…”

This sentence does nearly all the work in the paper. It sounds neutral, technocratic, and authoritative. It is none of those things.

There is no definition of ideological diversity among these “experts,” no attempt to measure disagreement, and no adversarial testing. The paper simply assumes that journalists, professional fact-checkers, and academics constitute a politically neutral reference class. Anyone familiar with the modern media-academic complex knows that assumption is indefensible.

Worse, the authors openly admit that they are not measuring accuracy at all. Instead, they rely on reputation as a stand-in:

“We followed a standard practice in the literature and used the reliability of the publisher as a proxy for accuracy of content.”

A proxy is not a measurement. And this proxy commits a fundamental category error: individual claims are judged not by whether they are true, but by whether the institution publishing them has been blessed by the correct set of gatekeepers. A factual article from a disfavored outlet is permanently “low quality.” A false article from a prestige outlet remains “high quality” by definition.

Accuracy never enters the model.
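
To make the category error concrete, here is a minimal sketch, with invented domain names and scores, of how a publisher-level proxy behaves: every article simply inherits its outlet's rating, so a false story from a prestige domain scores high and a true story from a disfavored domain scores low.

```python
# Toy illustration with invented scores: a publisher-level "quality" proxy
# assigns every article the rating of its domain, so the truth of the
# individual claim never enters the calculation.

DOMAIN_QUALITY = {                      # hypothetical domain-level ratings
    "prestige-outlet.com": 0.95,
    "disfavored-outlet.com": 0.30,
}

articles = [
    {"domain": "prestige-outlet.com",   "claim_is_true": False},
    {"domain": "disfavored-outlet.com", "claim_is_true": True},
]

for article in articles:
    score = DOMAIN_QUALITY[article["domain"]]   # masthead, not content
    print(f'{article["domain"]}: quality={score}, claim true? {article["claim_is_true"]}')
```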

One of the primary sources feeding these domain ratings is NewsGuard, an organization that has moved well beyond fact-checking into open policy advocacy, government partnerships, and content policing. Treating NewsGuard scores as an epistemic baseline is not neutral. It is ideological outsourcing.

Yet the paper anticipates this criticism and waves it away:

“The strong correlation between political leaning and source quality we observe… is unlikely to be the result of ideological bias among fact-checkers…”

This is not an empirical conclusion. It is a declaration of trust.

The authors argue that “politically balanced crowds” produce similar ratings, implying this somehow proves neutrality. But political balance does not equal epistemic independence. A Democrat and a Republican who both consume the same legacy media, trust the same institutions, and defer to the same authorities do not magically cancel out shared priors.
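
A quick simulation with made-up numbers illustrates the point. Suppose each rater's score equals a domain's true quality, plus a prior both raters share from trusting the same institutions, plus a partisan offset of opposite sign for each rater: averaging the "balanced" pair cancels the partisan offsets while the shared prior passes straight through.

```python
import random

random.seed(0)
SHARED_PRIOR = 0.8      # bias both raters inherit from the same institutions
PARTISAN_OFFSET = 0.5   # idiosyncratic left/right tilt, opposite signs

errors = []
for _ in range(10_000):                       # hypothetical domains being rated
    true_quality = random.gauss(0, 1)
    rater_left  = true_quality + SHARED_PRIOR + PARTISAN_OFFSET + random.gauss(0, 0.2)
    rater_right = true_quality + SHARED_PRIOR - PARTISAN_OFFSET + random.gauss(0, 0.2)
    crowd_score = (rater_left + rater_right) / 2   # "politically balanced" average
    errors.append(crowd_score - true_quality)

print(f"bias remaining after balancing: {sum(errors) / len(errors):.2f}")
# ~0.80 -- the partisan offsets cancel; the prior the raters share does not.
```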

Once “quality” has been defined this way, the paper’s headline results become tautological. The authors report:

“Lower-quality news domains are shared more on right-leaning platforms…”

Translated into plain English, this means: platforms populated by people skeptical of mainstream institutions tend to link to outlets disliked by mainstream institutions. That is not a discovery. It is a restatement of the setup.

Political lean itself is classified using GPT-4:

“To measure political lean, we used GPT-4o and asked it to rate domains…”

A large language model trained primarily on mainstream journalism and academic literature is being used to label ideological bias. The authors then “validate” these labels against other commonly used measures—measures built from the same institutional inputs. This is not validation; it is ideological echo.
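
A short simulation, again with invented numbers, shows why such agreement is weak evidence of independence: two measures that each lean heavily on the same upstream consensus signal correlate almost perfectly with one another even when neither tracks the underlying quantity well.

```python
import random

random.seed(1)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5

truth     = [random.gauss(0, 1) for _ in range(5_000)]   # the lean we want to measure
consensus = [random.gauss(0, 1) for _ in range(5_000)]   # shared institutional inputs

# Two "independent" measures that both draw mostly on the shared inputs:
llm_label   = [0.2 * t + 0.8 * c + random.gauss(0, 0.1) for t, c in zip(truth, consensus)]
other_index = [0.2 * t + 0.8 * c + random.gauss(0, 0.1) for t, c in zip(truth, consensus)]

print("measure vs. measure:", round(corr(llm_label, other_index), 2))  # ~0.99: looks "validated"
print("measure vs. truth:  ", round(corr(llm_label, truth), 2))        # ~0.24: barely tracks truth
```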

The paper’s central behavioral claim is that users engage more with “low-quality” content than “high-quality” content, even when controlling for the user:

“Across all platforms examined, a given user’s lower-quality news posts received higher average engagement…”
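
Both halves of that sentence can be true at once, and a toy example with invented counts shows how: if higher-quality links are far more numerous, they dominate total engagement even while the rarer lower-quality links average more engagement per post.

```python
# Invented counts, for illustration only.
high_quality_posts = [40] * 900    # 900 posts averaging 40 interactions each
low_quality_posts  = [120] * 100   # 100 posts averaging 120 interactions each

print("total engagement  -- high quality:", sum(high_quality_posts),   # 36000
      "| low quality:", sum(low_quality_posts))                        # 12000
print("average per post  -- high quality:", sum(high_quality_posts) / 900,   # 40.0
      "| low quality:", sum(low_quality_posts) / 100)                        # 120.0
```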

The paper frames this per-post engagement gap as evidence that misinformation is inherently more engaging. But the authors quietly concede a far less flattering explanation for the institutions they favor:

“An important contributor appears to be comparatively low engagement rates of posts linking to The New York Times, The Wall Street Journal, The Washington Post, USA Today, and Reuters…”

In other words, elite legacy outlets perform poorly. Users are not stampeding toward fringe conspiracy sites; they are disengaging from institutions that have become repetitive, moralizing, and predictably wrong on too many major issues to count.

Rather than take this as evidence of institutional fatigue or earned distrust, the authors reach for familiar psychological clichés:

“This pattern suggests an underlying reason simply might be user preference—e.g., for novel, negative, or moralizing content…”

Notice what is never considered: that some content labeled “low quality” might be accurate, insightful, or correct earlier than elite consensus allowed. That possibility would require interrogating the rating system itself, which the paper treats as sacrosanct.

The study’s confidence in its framework is further undercut by its funding disclosure:

“We acknowledge funding support from the Open Society Foundation.”

This is not a moral indictment. It is contextual information. The Open Society Foundation has invested heavily in misinformation research, platform governance, and content moderation. A paper that defines quality via activist-aligned institutions, finds dissent engaging, and frames that engagement as a problem fits neatly within that agenda.

What the paper actually demonstrates—despite its intentions—is that institutional authority no longer guarantees attention. Users are selective. They are skeptical. They are increasingly uninterested in being told what is true by organizations that spent years insisting on certainty where uncertainty reigned and demanding silence where debate was warranted.

A genuinely skeptical study would have examined claim-level accuracy. It would have tracked which “low-quality” claims later proved correct. It would have tested ideological variance among raters rather than asserting neutrality by credential. It might even have entertained the heretical notion that elite consensus is sometimes wrong.
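
None of those checks is exotic. As a minimal sketch of the rater-variance test, assuming a hypothetical table of rater ideology, domain, and score, one could simply compare how left- and right-identifying raters score the same domains instead of averaging the disagreement away:

```python
from collections import defaultdict

# Hypothetical ratings: (rater_ideology, domain, quality_score)
ratings = [
    ("left",  "example-news.com", 0.9), ("right", "example-news.com", 0.5),
    ("left",  "other-daily.com",  0.3), ("right", "other-daily.com",  0.7),
    # ... a real test would use many raters and domains
]

scores = defaultdict(lambda: defaultdict(list))
for ideology, domain, score in ratings:
    scores[domain][ideology].append(score)

# Large per-domain gaps mean the "expert" ratings encode ideology as well as
# accuracy -- a disagreement worth reporting, not averaging away.
for domain, groups in scores.items():
    left  = sum(groups["left"])  / len(groups["left"])
    right = sum(groups["right"]) / len(groups["right"])
    print(f"{domain}: left={left:.2f} right={right:.2f} gap={abs(left - right):.2f}")
```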

This paper does none of that. Instead, it constructs a closed epistemic loop, defines disagreement as low quality, and then expresses concern that disagreement is popular.

The real finding is not about misinformation. It is about the collapse of deference—and the quiet panic of institutions that mistook authority for truth.

6 Comments
Scarecrow Repair
December 25, 2025 6:11 pm

Well … it actually self-identifies as the PPNAS, the Prestigious Proceedings of the National Academy of Sciences.

Scissor
Reply to Scarecrow Repair
December 25, 2025 8:45 pm

It’s not “Piss Poor?”

John Hultquist
December 25, 2025 6:59 pm

“low quality news” Hmm? Is there any other kind?
Okay. So I’m not a social media type. I am 99.9997% not familiar with these platforms: X/Twitter, BlueSky, TruthSocial, Gab, GETTR, Mastodon, and LinkedIn.

Tom Halla
December 25, 2025 7:18 pm

“Low quality”? Like the New York Times or The Guardian?

Bob
December 25, 2025 8:09 pm

There is nothing surprising here. When you have lost the argument scientifically and observationally you instinctively attack those who disagree with you and their favored source of information. Yet more proof that our grant money is being wasted.

December 25, 2025 8:32 pm

‘wisdom of experts’=consensus
