Every so often an academic paper appears that claims to analyze a problem while quietly defining the outcome in advance. The recent PNAS paper by Mosleh et al., “Divergent patterns of engagement with partisan and low-quality news across seven social media platforms,” is a textbook example.
Abstract
In recent years, social media has become increasingly fragmented, as platforms evolve and new alternatives emerge. Yet most research studies a single platform—typically X/Twitter, or occasionally Facebook—leaving little known about the broader social media landscape. Here, we shed light on patterns of cross-platform variation in the high-stakes context of news sharing. We examine the relationship between user engagement and news domains’ political orientation and quality across seven platforms: X/Twitter, BlueSky, TruthSocial, Gab, GETTR, Mastodon, and LinkedIn. Using an exhaustive sample, we analyze all (over 10 million) posts containing links to news domains shared on these platforms during January 2024. We find that news shared on platforms with more conservative user bases is significantly lower quality on average. Turning to engagement, we find—contrary to hypotheses of a consistent “right-wing advantage” on social media—that the relationship between political lean and engagement is strongly heterogeneous across platforms. Conservative news posts receive more engagement on platforms where most content is conservative, and vice versa for liberal news posts, consistent with an “echo platform” perspective. In contrast, the relationship between news quality and engagement is strikingly consistent: Across all platforms examined, a given user’s lower-quality news posts received higher average engagement, even though higher-quality news is substantially more prevalent and garners far more total engagement across posts. This pattern holds when accounting for poster-level variation and is observed even in the absence of ranking algorithms, suggesting that user preferences—not algorithmic bias—may underlie the underperformance of higher-quality news.
https://www.pnas.org/doi/10.1073/pnas.2425739122
The authors announce, right up front, that they are studying “news quality,” political lean, and engagement across seven platforms. What they never seriously do is justify what “quality” means. Instead, they import it wholesale from an ideological ecosystem that has already decided which voices are acceptable.
Their own description is telling:
“We measure the quality of the news source linked to in each post using a ‘wisdom of experts’ approach in which ratings from a variety of fact-checkers, journalists, and academics are aggregated…”
This sentence does nearly all the work in the paper. It sounds neutral, technocratic, and authoritative. It is none of those things.
There is no definition of ideological diversity among these “experts,” no attempt to measure disagreement, and no adversarial testing. The paper simply assumes that journalists, professional fact-checkers, and academics constitute a politically neutral reference class. Anyone familiar with the modern media-academic complex knows that assumption is indefensible.
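The check is not hard to run, even in principle. Below is a minimal sketch of the rater-disagreement analysis the paper never reports, using made-up numbers and a hypothetical table of (domain, rater) scores; nothing here comes from the authors' data.

```python
# A minimal sketch (not from the paper) of the check the authors never report:
# quantify how much "expert" ratings diverge by rater ideology.
# Assumes a hypothetical table with one row per (domain, rater) pair.
import pandas as pd

ratings = pd.DataFrame({
    "domain":         ["nytimes.com", "nytimes.com", "dailywire.com", "dailywire.com"],
    "rater_ideology": ["left",        "right",       "left",          "right"],
    "quality":        [0.95,          0.70,          0.30,            0.75],  # illustrative numbers only
})

# Average quality score each ideological camp assigns to each domain.
by_camp = ratings.pivot_table(index="domain", columns="rater_ideology", values="quality")

# Disagreement per domain, and overall: large values would undercut the
# claim that the rater pool functions as a neutral reference class.
by_camp["disagreement"] = (by_camp["left"] - by_camp["right"]).abs()
print(by_camp)
print("Mean left-right disagreement:", by_camp["disagreement"].mean())
```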
Worse, the authors openly admit that they are not measuring accuracy at all. Instead, they rely on reputation as a stand-in:
“We followed a standard practice in the literature and used the reliability of the publisher as a proxy for accuracy of content.”
A proxy is not a measurement. And this proxy commits a fundamental category error: individual claims are judged not by whether they are true, but by whether the institution publishing them has been blessed by the correct set of gatekeepers. A factual article from a disfavored outlet is permanently “low quality.” A false article from a prestige outlet remains “high quality” by definition.
Accuracy never enters the model.
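To make the category error concrete, here is a toy illustration of my own (hypothetical domains and scores, not the paper's data) showing how a domain-level proxy scores every article identically, whatever the article actually says:

```python
# A toy illustration of what a domain-level proxy does: every article inherits
# its outlet's score, so the truth of the specific claim never enters the calculation.
domain_quality = {"prestige-outlet.example": 0.92, "disfavored-outlet.example": 0.18}  # hypothetical scores

def score_article(url: str, claim_is_true: bool) -> float:
    domain = url.split("/")[2]
    return domain_quality[domain]  # note: claim_is_true is never used

print(score_article("https://prestige-outlet.example/false-story", claim_is_true=False))  # 0.92
print(score_article("https://disfavored-outlet.example/true-story", claim_is_true=True))  # 0.18
```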
One of the primary sources feeding these domain ratings is NewsGuard, an organization that has moved well beyond fact-checking into open policy advocacy, government partnerships, and content policing. Treating NewsGuard scores as an epistemic baseline is not neutral. It is ideological outsourcing.
Yet the paper anticipates this criticism and waves it away:
“The strong correlation between political leaning and source quality we observe… is unlikely to be the result of ideological bias among fact-checkers…”
This is not an empirical conclusion. It is a declaration of trust.
The authors argue that “politically balanced crowds” produce similar ratings, implying this somehow proves neutrality. But political balance does not equal epistemic independence. A Democrat and a Republican who both consume the same legacy media, trust the same institutions, and defer to the same authorities do not magically cancel out shared priors.
Once “quality” has been defined this way, the paper’s headline results become tautological. The authors report:
“Lower-quality news domains are shared more on right-leaning platforms…”
Translated into plain English, this means: platforms populated by people skeptical of mainstream institutions tend to link to outlets disliked by mainstream institutions. That is not a discovery. It is a restatement of the setup.
Political lean itself is classified using GPT-4:
“To measure political lean, we used GPT-4o and asked it to rate domains…”
A large language model trained primarily on mainstream journalism and academic literature is being used to label ideological bias. The authors then “validate” these labels against other commonly used measures—measures built from the same institutional inputs. This is not validation; it is ideological echo.
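For readers unfamiliar with the mechanics, a call of roughly this shape is all that "measuring political lean with GPT-4o" amounts to. The prompt and scale below are my guesses, not the authors' actual instrument, and the point stands regardless of wording: checking the model's labels against other domain-lean measures built from the same mainstream inputs tests agreement, not neutrality.

```python
# A rough sketch of the kind of call involved; the prompt and rating scale here
# are assumptions, not the authors' actual instrument.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rate_domain_lean(domain: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                f"Rate the political lean of the news domain '{domain}' on a scale "
                "from -1 (strongly liberal) to +1 (strongly conservative). "
                "Reply with only the number."
            ),
        }],
        temperature=0,
    )
    # Correlating these labels with other domain-lean measures shows the model
    # echoes those measures; it says nothing about whether the shared baseline is neutral.
    return response.choices[0].message.content
```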
The paper’s central behavioral claim is that users engage more with “low-quality” content than “high-quality” content, even when controlling for the user:
“Across all platforms examined, a given user’s lower-quality news posts received higher average engagement…”
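Mechanically, "accounting for poster-level variation" comes down to a within-user comparison. Here is a minimal sketch with invented numbers, my reconstruction of that kind of analysis rather than the authors' actual model:

```python
# A minimal sketch of a within-poster comparison, assuming a hypothetical
# post-level table; illustrative only.
import pandas as pd

posts = pd.DataFrame({
    "user":       ["a", "a", "a", "b", "b", "b"],
    "quality":    [0.9, 0.3, 0.8, 0.7, 0.2, 0.9],  # domain-level quality score
    "engagement": [ 10,  40,  12,   5,  30,   4],  # likes/reposts, invented numbers
})

# Demean both variables within each user (a simple user fixed effect),
# then look at the remaining quality-engagement relationship.
within = posts.copy()
for col in ("quality", "engagement"):
    within[col] = within[col] - within.groupby("user")[col].transform("mean")

print(within[["quality", "engagement"]].corr().loc["quality", "engagement"])
# A negative number reproduces the paper's pattern: for the same poster,
# lower-"quality" links draw more engagement. It says nothing about whether
# the quality labels themselves are sound.
```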
This is framed as evidence that misinformation is inherently more engaging. But the authors quietly concede a far less flattering explanation for the institutions they favor:
“An important contributor appears to be comparatively low engagement rates of posts linking to The New York Times, The Wall Street Journal, The Washington Post, USA Today, and Reuters…”
In other words, elite legacy outlets perform poorly. Users are not stampeding toward fringe conspiracy sites; they are disengaging from institutions that have become repetitive, moralizing, and predictably wrong on too many major issues to count.
Rather than take this as evidence of institutional fatigue or earned distrust, the authors reach for familiar psychological clichés:
“This pattern suggests an underlying reason simply might be user preference—e.g., for novel, negative, or moralizing content…”
Notice what is never considered: that some content labeled “low quality” might be accurate, insightful, or correct earlier than elite consensus allowed. That possibility would require interrogating the rating system itself, which the paper treats as sacrosanct.
The study’s confidence in its framework is further undercut by its funding disclosure:
“We acknowledge funding support from the Open Society Foundation.”
This is not a moral indictment. It is contextual information. The Open Society Foundation has invested heavily in misinformation research, platform governance, and content moderation. A paper that defines quality via activist-aligned institutions, finds dissent engaging, and frames that engagement as a problem fits neatly within that agenda.
What the paper actually demonstrates—despite its intentions—is that institutional authority no longer guarantees attention. Users are selective. They are skeptical. They are increasingly uninterested in being told what is true by organizations that spent years insisting on certainty where uncertainty reigned, and on silence where debate was warranted.
A genuinely skeptical study would have examined claim-level accuracy. It would have tracked which “low-quality” claims later proved correct. It would have tested ideological variance among raters rather than asserting neutrality by credential. It might even have entertained the heretical notion that elite consensus is sometimes wrong.
This paper does none of that. Instead, it constructs a closed epistemic loop, defines disagreement as low quality, and then expresses concern that disagreement is popular.
The real finding is not about misinformation. It is about the collapse of deference—and the quiet panic of institutions that mistook authority for truth.
Well … it actually self-identifies as the PPNAS, the Prestigious Proceedings of the National Academy of Sciences.
It’s not “Piss Poor?”
Pee Pee, Not Actually Science. It’s a garbage can of fake woke bull shank. Significantly low quality, and totally self-inflicted. The entire enterprise is a waste of space and time, an epidemic brain infection driven by the collapse of modern academia into a clown show.
“low quality news” Hmm? Is there any other kind?
Okay. So I’m not a social media type. I am 99.9997% not familiar with these platforms: X/Twitter, BlueSky, TruthSocial, Gab, GETTR, Mastodon, and LinkedIn.
I take exception also to the word “news.”
These are not the droids we are looking for. Move along. Move along.
I, too, avoid social media. If it were classified as entertainment, there would be no issue, but it is not.
I have looked at a few. The moment I see any “suggested for you” links, I immediately uninstall that app. For the same reason I stopped buying newspapers: Like hell am I going to pay for the privilege of being propagandised.
With that said, a vast majority of “alternative” sources pandering to the so-called Right/conservative are of very high quality, well written and presented by very professional operators, who are extremely adept at disguising their Bolshevik propaganda as patriotism.
As an example, I offer a multitude of engaging, intelligent and apparently well-educated people on this site, hankering for “Small Government,” completely oblivious to the fact that they get their philosophy from the very same people who have been absorbing social functions into the stock market, where public property goes into the pocket of the biggest shareholder, the operational debt remains with the taxpayer, and the actual service (health, education, infrastructure) degrades to the point where, well, we are now. Because Boeing/GE/insert-whatever-fund runs your local water supply, having turned into financial institutions that believe capitalism creates itself, so they replaced engineers with bean counters.
Now watch them pile on me for insulting their favorite dumbass economic model…
Nationalise all infrastructure, or continue on the road to poverty and slavery, and re-educate the stupid who still think privatization has solved anything.
P.S. to commenters: Before you shout, please do not exhibit your ignorance by calling me a socialist, unless you can get your head around the fact that your government’s economic policy is your state religion, communism is not atheism but theocracy, and terrorism is, by definition, a function of government.
And I would love a real-world example where privatisation has improved anything for the consumer over the long run. Name me just one…
Name me just one
I’m guessing you don’t remember the break up of the telephone monopolies?
Yeah, I defer to you for that one. In my country, privatising the state Telco has opened the door to fraud, failure and high prices, while my taxes still carry the infrastructure via subsidies, tax breaks, VAS fraud and creative accounting.
By the way, when they “broke up” your monopolies, they were already privately owned, no? If so, then it is irrelevant to the issue at hand; if not, it is direct proof of the need to nationalize.
But yeah, how exactly did you experience real benefit in the long run, as in to this day? Do you get anything from your Telco that is not expensive, half assed and subsidized to the hilt?
I find it hard to differentiate “nationalized” vs. “privately owned government-protected monopoly”, given that they are effectively the same thing.
Prices dropped and options increased afterward. Things changed massively with the introduction of cell phones so I hardly think we can compare today’s options to those of 40 years ago.
You seem to have the opinion that nationalization is better than privatization? Would that apply to cell phones and providers? How about grocery stores and retail stores?
“Low quality”? Like the New York Times or The Guardian?
High quality = self-declared authoritative media (= followers of the great narrative)
Low quality = everything that goes against the great narrative, or questions it
High quality means, using Ukraine as an example: Russia’s army is weak, unorganized and underequipped; it took them four years to take 1% of Ukraine when they wanted to take the whole country in three days; and the Russians have lost a million soldiers.
Low quality means pointing out that no arms race is needed if Russia is so weak; that such a weak Russia could never beat Ukraine, or that doing so at this rate would take the Russians 400 years; that Russia never claimed it would take Ukraine in three days, and that this nonsense originates from the USA – in this case General Mark Milley; and that the Russian military cannot have lost a million soldiers when you look at the corpse exchange rate, which is on average 30+ Ukrainians for 1 Russian body.
It is very simple to tell high quality from low quality: high quality needs censorship and never gets fact-checked by the high-quality fact-checkers.
There is nothing surprising here. When you have lost the argument scientifically and observationally you instinctively attack those who disagree with you and their favored source of information. Yet more proof that our grant money is being wasted.
‘wisdom of experts’=consensus
‘wisdom of experts’=oxymoron
“wisdom of experts” = opinions.
Agreement of opinions = consensus
As Richard Feynman would say, science is the belief in the ignorance of experts.
Human behavior expert Chase Hughes claims that if an opinion cannot be voiced, you’re involved in a psy-op or a religion! In either case thought control is being used to suppress heretical ideas!
One thing should be stated loudly and clearly in the New Year: scientific inquiry REQUIRES skepticism! The Venn diagram of Scientism and Skepticism is two non-overlapping circles! Without critical thought and skepticism, one can NOT advance beyond being an advocate or an ideologue! With them you wind up a free-range, feral human; much despised by corporate elites the world over!
Happy New Year to the freethinkers of WUWT!
Science makes progress by building on/correcting mistakes.
In the studies of climate, there has been no real progress.
In the areas of physical sciences, engineering, and associated disciplines, progress may have been made.
‘Wisdom of experts’ is an oxymoron.
I saw your post too late.
I posted the same above.
Excellent critique, Charles.
Those authors just demonstrate their own biases.
They use the term echo chamber, but with no self-awareness.
Should have said:
Apparently, echo chambers are invisible from inside.
Resistance is futile. You will be assimilated.
Control the language, control the ideas.
Control the media, control the ideas.
George Orwell missed one. Not really.
A paper that is 97% bullshit perhaps?
Unbelievable! Total crap.
I remember when one of the “big boys” told me at a conference that if I published in PNAS and like journals he would have known about my work.
You say of “The strong correlation between political leaning and source quality we observe… is unlikely to be the result of ideological bias among fact-checkers…”: This is not an empirical conclusion. It is a declaration of trust.
I would say it is more than that: The paper assumes a strong correlation between political leaning and source quality.
In other words, the whole thing is circular logic. OK, you said effectively that in the article, but maybe saying it explicitly could help. Thanks anyway for an excellent article, I hope you had a great Christmas, and 2026 is surely going to be the year that the scam collapses. Isn’t it?