Every Leading Large Language Model Leans Left Politically

By Ross Pomeroy

Large language models (LLMs) are increasingly integrating into everyday life – as chatbots, digital assistants, and internet search guides, for example. These artificial intelligence (AI) systems – which consume large amounts of text data to learn associations – can create all sorts of written material when prompted and can ably converse with users. LLMs’ growing power and omnipresence mean that they exert increasing influence on society and culture.

So it’s of great import that these artificial intelligence systems remain neutral when it comes to complicated political issues. Unfortunately, according to a new analysis recently published in PLOS ONE, this doesn’t seem to be the case.

AI researcher David Rozado of Otago Polytechnic and Heterodox Academy administered 11 different political orientation tests to 24 of the leading LLMs, including OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, and xAI’s Grok. He found that they invariably lean slightly left politically.
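Rozado’s approach — posing standardized questionnaire items to each model and scoring the stances it takes — can be sketched in miniature. Everything below is illustrative, not from the paper: `ask_model` is a stub standing in for a real LLM API call, and the items and weights are invented.

```python
# Minimal sketch of scoring a model's answers on a political-orientation quiz.
# ask_model is a placeholder for a real LLM API call; items/weights are invented.

ITEMS = [
    # (statement, weight: agreement with +1 scores "right", with -1 scores "left")
    ("The government should regulate large corporations more strictly.", -1),
    ("Lower taxes are generally better for society.", +1),
]

def ask_model(statement: str) -> str:
    """Placeholder: a real implementation would query an LLM and parse
    its free-text answer down to 'agree' or 'disagree'."""
    return "agree"  # stub: this toy model agrees with everything

def political_score(items) -> float:
    """Average of per-item scores; negative = left-leaning, positive = right."""
    total = 0
    for statement, weight in items:
        answer = ask_model(statement)
        total += weight if answer == "agree" else -weight
    return total / len(items)

print(political_score(ITEMS))  # toy model agrees with both items: (-1 + 1)/2 = 0.0
```

In the real study, the hard part is the parsing step the stub glosses over: conversational models often hedge or refuse, so forcing a scoreable answer is itself a methodological choice.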

“The homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy,” Rozado commented.

This raises a key question: why are LLMs so universally biased in favor of leftward political viewpoints? Could the models’ creators be fine-tuning their AIs in that direction, or are the massive datasets upon which they are trained inherently biased? Rozado could not conclusively answer this query.

“The results of this study should not be interpreted as evidence that organizations that create LLMs deliberately use the fine-tuning or reinforcement learning phases of conversational LLM training to inject political preferences into LLMs. If political biases are being introduced in LLMs post-pretraining, the consistent political leanings observed in our analysis for conversational LLMs may be an unintentional byproduct of annotators’ instructions or dominant cultural norms and behaviors.”

Ensuring LLM neutrality will be a pressing need, Rozado wrote.

“LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society. Therefore, it is crucial to critically examine and address the potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries.”

Source: Rozado D (2024) The political preferences of LLMs. PLOS ONE 19(7): e0306621. https://doi.org/10.1371/journal.pone.0306621

This article was originally published by RealClearScience and made available via RealClearWire.

79 Comments
altipueri
August 17, 2024 2:17 am

And Microsoft has just changed the terms of its service agreement to say you should not rely on its AI.

https://www.theregister.com/2024/08/14/microsoft_services_agreement_update_warns/

Microsoft is notifying folks that its AI services should not be taken too seriously, echoing prior service-specific disclaimers.

In an update to the IT giant’s Service Agreement, which takes effect on September 30, 2024, Redmond has declared that its Assistive AI isn’t suitable for matters of consequence.

“AI services are not designed, intended, or to be used as substitutes for professional advice,” Microsoft’s revised legalese explains.

David Wojick
Reply to  altipueri
August 17, 2024 3:15 am

These are liability issues.

strativarius
Reply to  David Wojick
August 17, 2024 3:53 am

See Sue, Grabbit & Runne.

Rational Keith
Reply to  David Wojick
August 17, 2024 1:51 pm

Of course, but hypocrisy – pushing it onto customers but not taking responsibility for quality.
Oh Keith, that is the outfit you criticize for bloating its product with cutesiness while not fixing defects. 😉

SasjaL
Reply to  altipueri
August 17, 2024 4:14 am

From a historical point of view, do not trust Microsoft.

Walter Sobchak
Reply to  SasjaL
August 17, 2024 5:49 am

Always sage advice.

Reply to  altipueri
August 17, 2024 1:43 pm

All of them misunderstand questions sometimes and give an incorrect answer. Double-check anything of importance.

Rational Keith
Reply to  altipueri
August 17, 2024 1:49 pm

While pushing it onto customers.
‘Microsloppy’ I call it, or worse.

NickR
Reply to  altipueri
August 17, 2024 5:03 pm

Hallucinations, as they are called, happen as often as 30% of the time, with no indication that the response is made-up garbage, and are likely not fixable with this generation of “AI” (advanced search is a better name).

Bill Toland
August 17, 2024 3:00 am

When Chatgpt first came out, I gave it a few test questions to gauge its credibility. Every answer it gave was discredited nonsense. Then I noticed that two of its trusted sources were the BBC and Wikipedia. I haven’t used it since.

Clyde Spencer
Reply to  Bill Toland
August 17, 2024 11:58 am

It has been my experience that if you call out the AI and state that it is wrong, it won’t argue with you and readily admits that what it said is wrong. The problem is, if one is not knowledgeable enough to know when they are being gaslighted, they will go away thinking that they have been told the truth. The second problem is that the ephemeral ‘truth’ only exists for that particular exchange. It ‘forgets’ what it has acknowledged as the truth as soon as you terminate the exchange. It may be designed that way on purpose! It can propagandize users, but to avoid liability for lying, it readily admits to making a ‘mistake’ when it is caught with its hand in the proverbial cookie jar.
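The “forgetting” described above reflects how chat LLMs are typically stateless: the apparent memory within a session is just the transcript being re-sent with every request, and a new session starts from nothing. A minimal sketch of that design (the `Chat` class and `generate` stub are illustrative, not any vendor’s actual API):

```python
# Sketch of stateless chat: the model "sees" only what is re-sent each turn.
# generate() is a stub standing in for a real LLM call.

def generate(transcript):
    """Placeholder for an LLM API call that sees only this transcript."""
    return f"(reply based on {len(transcript)} prior messages)"

class Chat:
    def __init__(self):
        self.transcript = []          # history lives client-side, not in the model

    def send(self, message):
        self.transcript.append(message)
        reply = generate(self.transcript)  # full history re-sent every turn
        self.transcript.append(reply)
        return reply

chat = Chat()
chat.send("CO2 lags temperature in the ice cores.")  # correction is in *this* transcript
print(chat.send("Summarize our discussion."))        # (reply based on 3 prior messages)

fresh = Chat()                        # a new session: the correction is gone
print(fresh.send("Summarize our discussion."))       # (reply based on 1 prior messages)
```

Under this design, any “correction” a user extracts lives only in that one transcript; updating the model itself would require retraining or fine-tuning, not conversation.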

Reply to  Clyde Spencer
August 17, 2024 12:45 pm

Related:
https://www.foxnews.com/lifestyle/this-day-history-august-17-1945-george-orwells-animal-farm-published
Someone made this comment:

If you’ve been around more than twenty years NOW and moving forward, or maybe witnessed subtle, yet abrupt changes of the way the world reinterpreted itself, compared to how each society defines itself within it. Here’s to offering only ONE basic realistic question, without any reference to the Biblical scripture nor suggesting any thoughts of Orwell’s notable works.

Has the evolving world condition(s) currently happening, Really any better NOW compared to ten, twenty, thirty, or even forty years ago ?

Perhaps consider, then compare some other relative Orwell quotes of those/these days.

“The further a society drifts from the truth, the more it will hate those who speak it.”

“He who controls the past controls the future. He who controls the present controls the past.”

“Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.”

~ George Orwell, 1984

P.S. Thanks again FOX for remembering yet another REAL date in history, an important man and a pragmatic view of the future. Also, most importantly, thanks for supporting this platform, essentially representing and exercising inalienable right(s) to all fundamental freedoms of speech, expression and believing in a responsibly free and accurate press for all.

This was my reply:

“Fahrenheit 451” ranks right up there also.

But it is a bit dated. People need information to form their own opinions and make decisions. Now it’s much easier to just censor the internet and/or use AI to control what information is available to people.

PS 451*F is the temperature at which paper burns, according to the book.

I’ll add that in that book the “Firemen” were the ones sent out to burn “misinformation”.

scvblwxq
Reply to  Clyde Spencer
August 17, 2024 2:11 pm

I use Claude.ai and I can save the chats so I can add on to them and it will still have the old information as the context.

Clyde Spencer
Reply to  scvblwxq
August 17, 2024 7:19 pm

That works for you, but what about the naive individual who is looking for information on the climate debate and makes the mistake of trusting the liberal interpretation of the other LLMs and doesn’t realize that he is being fed misinformation?

scvblwxq
Reply to  Clyde Spencer
August 17, 2024 7:56 pm

On one of my chats with Claude I’ve gotten it a bit skeptical about “climate change” although that will be lost when I end the chat.

Rational Keith
Reply to  Bill Toland
August 17, 2024 1:57 pm

A major problem with Wikipedia is bias by authors of articles. It finally had to block one insistent climate alarmist.
In general, articles are good for getting leads, but they may omit key factors and are often so verbose that it is challenging to get the essentials out of them.
I just jabbed Wikipedia for leaving the elected Chiefs out of its article on the opposition of hereditary chiefs to the Coastal GasLink pipeline in NW BC. The former want the pipeline for its economic benefits to their members and for the access roads to reach services including health care.
You can sign up to edit articles, but you have to learn the formatting needed (it is plain text, perhaps with a WYSIWYG UI, but you have to get everything correct to its protocols).

Reply to  Rational Keith
August 17, 2024 7:22 pm

It has been my experience that articles on physics and math have been usable. However, for things like climatology, much less so. I have been solicited for donations and subsequently had email exchanges with the founder. He denies any bias. He is blind to the obvious.

Reply to  Bill Toland
August 17, 2024 3:22 pm

I used one once. I was unable to find what I wanted using various search terms. Possibly entirely my fault. I tried the Ai search. It wrote a short paragraph in reply. It was immediately apparent there had been no meeting of the minds.

1saveenergy
August 17, 2024 3:07 am

“AI services are not designed, intended, or to be used as substitutes for professional advice,”

Climate models (or models of any sort ) are not designed, intended, or to be used as substitutes for empirical data.

But they are used daily, (as Ai will be), to make fortunes for the few to the cost of the rest of us.

Rational Keith
Reply to  1saveenergy
August 17, 2024 2:07 pm

😉

David Wojick
August 17, 2024 3:13 am

My guess is a lot of the creators are liberals and so are the training materials especially academic lit and mainstream press. But if the bias is “slight” it may not be much of a concern.

karlomonte
Reply to  David Wojick
August 17, 2024 6:56 am

How exactly is “lean slightly left politically” measured?

JBP
Reply to  karlomonte
August 17, 2024 10:57 am

If the LLM does not call for the death penalty but only life imprisonment of AGW skeptics, then it only leans slightly left

Curious George
Reply to  David Wojick
August 17, 2024 7:11 am

It is not creators. It is training data. Democrats control mainstream media almost completely. AI is all about “a lie repeated 1000 times becomes the truth” [Dr. Goebbels].

Joe Crawford
Reply to  Curious George
August 17, 2024 7:53 am

Don’t forget Robert Conquest’s Second Law of Politics: “Any organization not explicitly right-wing sooner or later becomes left-wing.”

Reply to  Joe Crawford
August 17, 2024 3:30 pm

And often the “explicitly right-wing” do the same, but maybe more slowly.

Rational Keith
Reply to  Curious George
August 17, 2024 2:09 pm

I say it is both.
You often encounter poorly written software.
Recall WUWT has had problems with WorsePress software it uses to host this forum.
In one case Anthony had to confront WorsePress with evidence to get it to fix a long-standing problem.

Ron Long
August 17, 2024 3:16 am

From my experience managing a research group (processing digital data for mining exploration), when the owner wanted to experiment with AI, it didn’t work. The first learning datasets, intentionally simple, worked, but as the complexity increased it tended to follow one path and resist balanced input. We abandoned the effort and utilized experienced expert’s input. My experience said that simple repetitive processes work well with AI, but AI cannot sort out complexity, especially when potentially exposed to bad data input.

Rational Keith
Reply to  Ron Long
August 17, 2024 2:13 pm

What’s needed is quality software and methods to help humans analyze data.
Stephen McIntyre is renowned for finding botches in statistics calculations by climate alarmists.
(His background is mineral exploration.)
Climate Audit
Hitting hard this year: “In today’s article, I will report on detective work on Esper’s calculations, showing that the article is not merely a trick, but a joke.”

Reply to  Rational Keith
August 17, 2024 3:59 pm

A basic assumption in those reconstructions, and in the critique of same reported on Climate Audit, is that small fractions of a degree of temperature in any calculation of average have any significance whatsoever in the real world. Who am I to say otherwise? But I do. The entire exercise is futile beyond pointing out that every “observer” has a different viewpoint.

strativarius
August 17, 2024 3:24 am

A case of that infamous lefty invention…unconscious bias?

I’m sure it is anything but.

Rational Keith
Reply to  strativarius
August 17, 2024 3:14 pm

Well, there may be a contribution from the _subconscious_ mind.
It is processing using stored data and values, then pops up answers.
But its answers need to be validated by the conscious mind – data may be out of date, values may be wrong.
But catastrophists evade the need.

(Sounds somewhat like ‘artificial intelligence’? 😉)
And the old computing maxim: Garbage In = Garbage Out.

MyUsername
August 17, 2024 4:32 am

Realities left bias showing again 😀

Fraizer
Reply to  MyUsername
August 17, 2024 8:37 am

Virtual Realities [sic] left bias showing again

FIFY

Mr.
Reply to  Fraizer
August 17, 2024 2:51 pm

Imagined realities.

Tom Johnson
August 17, 2024 4:48 am

A while back I had an exchange with “Co-pilot.” I asked, “When will the present interglacial, the Holocene, end, returning us to the Ice Age?” The answer was essentially: “We don’t know, but some scientists think it will be as long as 125,000 years.” No matter how I asked the question, the answer always was similar, with the same ending. It never gave a reference for the 125,000-year number, even when asked. After a half dozen or so tries it came back with “Your time is up.”

Several times I have asked questions of a technical nature where I knew the correct answer. Co-Pilot was always wrong.

Bruce P
August 17, 2024 4:49 am

It’s likely that the “training” leans heavily on Wikipedia, which is entirely corrupted by left-wing editors. It would be very hard to use the general internet for any kind of science information as the vast majority of posters on the internet are non-scientists.

As far back as 2008, articles here on WUWT reported that the Google algorithms suppressed searches on ClimateGate. Pseudoscience rules the internet.

The training makes the AI. It has no other sources. It can’t look at the sky or peer into a microscope.

This makes it much less useful than a simple Duckduckgo search. At least with a search, if you get an answer that seems to have a strong bias you can see the name of the site. You don’t end up getting child-rearing advice from CandyVans dot com.

August 17, 2024 5:25 am

re: “Every Leading Large Language Model Leans Left Politically”

The common factor (or denominator as the saying goes) may just be -wait for it- STUPIDITY!

Or raw, unbridled youth (lacking life’s experience) if one prefers something not so abrasive.

Bob Rogers
August 17, 2024 5:33 am

Since news media leans left it seems like any LLM trained with news archives would lean left.

Using ChatGPT for any business purposes is dangerous because the click through license holds you fully responsible for their legal fees if they get sued for your use of the service.

Reply to  Bob Rogers
August 17, 2024 9:59 am

IMHO the “click through license” that you reference would not be deemed to be lawful, and thereby enforceable, upon review by any human judge.

Walter Sobchak
August 17, 2024 5:48 am

How much do you want to bet that National Review, the Hoover Institutions, and First Things are not part of their training data?

The programs are the product of Gen Z residents of San Francisco. They are pig-ignorant of history and philosophy and totally certain of their own virtue and the obvious correctness of their own grab bags of belief. (Can a boy have XX chromosomes? Can a girl have a penis and testicles?) What do you expect the product of such minds to look like?

1saveenergy
Reply to  Walter Sobchak
August 19, 2024 1:22 pm

“(Can a boy have XX chromosomes? Can a girl have a penis and testicles?)”

Yes & Yes

According to Blackless, Fausto-Sterling et al., 1.7 percent of human births might be intersex, including variations that may not become apparent until, for example, puberty, or until attempting to conceive. 1.7% is an outlier, but a significant one.

Approx 2% of newborns have an extra finger or toe.

1saveenergy
Reply to  1saveenergy
August 19, 2024 3:27 pm

Sorry, didn’t post the link …
https://www.researchgate.net/publication/11812321_How_Sexually_Dimorphic_Are_We_Review_and_Synthesis

Also, 1 in 100,000 are born as true hermaphrodites (individuals who have both testicular and ovarian tissues).

ferdberple
August 17, 2024 5:54 am

https://chatgpt.com/share/c1900098-a623-48ee-b26a-5118cf44ab69

Chatgpt analysis of Canada’s carbon tax.

Reply to  ferdberple
August 17, 2024 11:57 am

Doesn’t tell us anything we didn’t already know, it was always about the money/virtue not about the climate.

ferdberple
August 17, 2024 6:04 am

Deleted

Coach Springer
August 17, 2024 7:24 am

“Could the models’ creators be fine-tuning their AIs in that direction, or are the massive datasets upon which they are trained inherently biased? Rozado could not conclusively answer this query.”

Allow me. Yes.

Andy Pattullo
August 17, 2024 7:53 am

Just as young children tend to adopt the views and values of their parents these language models are being schooled by the widespread biases in society’s literature and media. This should be no surprise. The term “Artificial Intelligence” is, in my opinion, misapplied. It is not intelligence we have built but simply complex data aggregation and analysis that reflects all the preconceived ideas present in the input data. “Learning Language Models” is a much more appropriate term as it describes exactly what is happening and doesn’t imply that the “learning” is the same thing as discovering truth.

Walter Sobchak
Reply to  Andy Pattullo
August 17, 2024 8:44 am

The rule of GIGO still applies.

For the whippersnappers, GIGO is the acronym for Garbage In, Garbage Out.

Pat Frank
Reply to  Walter Sobchak
August 17, 2024 9:54 am

Willie Soon describes GIGO for climate models as, Garbage in, Gospel Out.

Erik Magnuson
Reply to  Pat Frank
August 17, 2024 10:23 am

I first heard that version of GIGO in the 1980’s in reference to computer models in general – especially models that had not been validated.

August 17, 2024 8:02 am

I suspect that the evolution of AI will follow behaviours described by chaos and complexity theory and Darwinian theories. Any monoculture will be extremely vulnerable to invasion by a different meme, especially when it proves to be sub optimal. There is no guarantee that the evolutionary process will always represent an advance. There are possible outcomes that would be very bad for humanity.
The best defence is keeping yourself and your friends and relatives educated. At the present level of development it is not difficult to find fault with AI outputs. There are some areas where AI can produce useful outputs, but these are mostly either extensions of KBS or computer control systems for plant etc..

August 17, 2024 8:14 am

The problem with LLMs is that the database from which they derive their answers is totally corrupted with over a hundred years of leftist propaganda. The AI programmers can’t see the bias because it permeates every aspect of human interactions, so they believe it must be true.

The only hope for a useful AI would be using data sets that only include empirical information. That would exclude models and “feelings”.
I get frustrated with the blatant lies, misinformation, and basically anything coming from the leftist establishment/government promoted statements that are designed to push emotional responses.

FKH

mleskovarsocalrrcom
August 17, 2024 8:16 am

AI is nothing more than a complex search engine. If the programmers don’t like certain data vaults they can exclude them entirely. Likewise they can include vaults known only to be to their liking. There’s money to be made in biased data vaults.

Mr.
August 17, 2024 8:24 am

AI bots and Wikipedia are replete with “contributors” who are from the same cohort that I have to deal with regularly in real life.

I have relatives and friends who are locked in to leftist ideology.

Their stance on any topic is exactly whatever prevailing leftist mantra is dictating.

If I even try to find a common point of agreement with them by introducing an incontrovertible relevant fact about the matter, their heads start shaking from side to side and their eyes are closed before I’ve even finished stating the point.

Funnily enough, I have been noticing lately that as the Harris bid for US president “re-invents” (i.e. 180-degree turn-abouts) her previously stated policies and positions on various topics (with which my leftist friends totally agreed), they now support the new Harris campaign’s statements.

Be interesting to see how the AI bots respond to these turn-abouts.

karlomonte
Reply to  Mr.
August 17, 2024 9:27 am

There’s a new label that will be hard to escape from: Kamunism.

Mr.
Reply to  karlomonte
August 17, 2024 12:19 pm

Looking more like that every day.

The thin cross-over line from socialism to communism is well summarized in the lyrics of the “Rocky Horror Show” song “The Time Warp” –

“It’s just a jump to the left”

August 17, 2024 8:34 am

The preponderance of media is either left leaning or outright propaganda. A person has to be very diligent, cross checking multiple sources to be sure of pretty much anything out there. Consider the widely accepted narratives pushed by propaganda:
CAGW
6 foot distancing
Wear your paper mask
Safe and Effective
Jan 6 Insurrection
Joe is sharp as a tack

I could go on, but you get the idea. So it does not surprise me in the least that LLMs, trained on garbage spew out garbage. They are also implicitly (if not explicitly) trained that it is OK to lie in furtherance of a cause – and they do. I think CTM has cornered them more than once with very pointed questioning (maybe I am misremembering that).

Richard Greene
August 17, 2024 8:35 am

AI is like a polite leftist, repeating the party line like a well-trained parrot … but not insulting the person who asked the question. The IPCC is the party line. Their wild guesses of the future climate are treated as the gospel. With 100% confidence.

Mr.
Reply to  Richard Greene
August 17, 2024 12:24 pm

and yet, the IPCC’s own stated position on climate predictions was –

“predictions of future climate states are not possible in our coupled, non-linear, chaotic system”

So clearly, many climate “scientists” do not subscribe to the “settled consensus science” mantra expounded by UN bludgers and other political parasites.

ToldYouSo
August 17, 2024 9:50 am

The built-in failure mode for AI is that, in almost all implementations, it searches the World Wide Web for synthesis of answers to questions based on the best consensus available . . . not recognizing that there is much more false information and disinformation out there than there is true and accurate information. Just look at what has happened on social media platforms.

IOW, neither IT specialists nor AI programmers have yet developed a method for any computer system to actually distinguish truth from falsehood outside of pure mathematics. “Consensus” does not automatically equate to “truth”.

Net overall result for AI today: QIGO (Question In, Garbage Out).

scvblwxq
Reply to  ToldYouSo
August 17, 2024 2:18 pm

With Claude.ai you can change its “mind” by providing examples contrary to what it presented. If you save the session the changed state will be preserved for the next interaction, but it won’t update its overall knowledge.

Reply to  scvblwxq
August 17, 2024 3:02 pm

Invites the question: are you really “changing its mind”, or instead is the AI just appeasing you so that you quit asking it troublesome questions??

Perhaps Claude.ai is talking to you in “parent mode”? . . . something that you may want to ask it in the future.

MCourtney
August 17, 2024 9:55 am

This is a function of what a large language model is. It takes a lot of data – the more the better – and extracts views from that.

Obviously, that cannot be narrow. It cannot be rigidly traditional. It cannot be right-wing.

The more viewpoints that are admitted, the more liberal the opinions become. That is what being liberal means.

Nansar07
Reply to  MCourtney
August 17, 2024 12:07 pm

Your last sentence describes classical liberalism; those folks are few and far between. The modern-day liberal brooks no opinion that disagrees with their own.

Mr.
Reply to  Nansar07
August 17, 2024 12:28 pm

So true Nansar07.

The term “liberal” as used to categorize today’s political position should be updated to –
“left-wing ideologues”.

MCourtney
Reply to  Nansar07
August 18, 2024 1:14 am

But that’s the point.
This is an academic paper in PLOS ONE that is using the word “liberal” in its correct, technical sense.
The article is using the word “liberal” in its colloquial, on-line sense.

The article does not speak the same language as the paper that it is reporting on and so has misled a lot of people. Including, probably the author (Ross Pomeroy).

Steven Mosher
August 17, 2024 11:40 am

let’s see.
at the bottom, ALL LLMs use the same data, open source text, to train the models.
the methods differ a bit here and there, but it’s basically the same garbage in.

as the author suggests, the consistent left-leaning bias could be due to

  1. the shared training data.
  2. similar fine-tuning processes.

what he forgets are the tests themselves. he chooses 11; why 11? because there is no standard test; political leaning is a subjective measurement nightmare

https://www.politicalcompass.org/

here’s an example of one.

take the test. ask yourself, does this test register me as right, as I am, or did it skew me toward center, which is a left bias?

go ahead, take the test
https://www.politicalcompass.org/test/en?page=1

try it three times.

  1. as yourself
  2. as your view of leftists
  3. as your view of right wingers

hint: right/left isn’t measurable in people, much less LLMs

Mr.
Reply to  Steven Mosher
August 17, 2024 12:56 pm

Thanks Steven.
I did one of these types of assessment years ago and it reported that my politics aligned with Nelson Mandela’s.
I didn’t know that Nelson was a border-line conservative.
But he was a rationalist, so I get that.

My assessment from the PoliticalCompass questionnaire you posted reveals –

Economic Left/Right: 1.0
Social Libertarian/Authoritarian: -2.31

However, I thought that many of the answer choices should have also offered –

DEPENDS ON CONTEXT / PARTICULAR APPLICATION

(or just UNSURE)
