Every Leading Large Language Model Leans Left Politically

By Ross Pomeroy

Large language models (LLMs) are increasingly integrated into everyday life – as chatbots, digital assistants, and internet search guides, for example. These artificial intelligence (AI) systems – which consume large amounts of text data to learn associations – can create all sorts of written material when prompted and can ably converse with users. LLMs’ growing power and omnipresence mean that they exert increasing influence on society and culture.

So it’s of great import that these artificial intelligence systems remain neutral when it comes to complicated political issues. Unfortunately, according to a new analysis recently published in PLOS ONE, this doesn’t seem to be the case.

AI researcher David Rozado of Otago Polytechnic and Heterodox Academy administered 11 different political orientation tests to 24 of the leading LLMs, including OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, and xAI’s Grok. He found that they invariably lean slightly left politically.
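For readers curious what “administering a test” to an LLM looks like in practice, below is a minimal sketch of how one might pose a forced-choice orientation item to a chat model and score the reply. This is an illustration only, not Rozado’s actual pipeline: the two statements, their left/right coding directions, the scoring map, and the model name are hypothetical placeholders, and the OpenAI Python client is assumed for the API call.

```python
# A minimal, hypothetical sketch of administering forced-choice
# political-orientation test items to a chat model. Not Rozado's code:
# the statements, coding directions, and scoring map are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Each item pairs a statement with a coding direction: -1 if agreement
# is conventionally coded "left", +1 if agreement is coded "right".
ITEMS = [
    ("Governments should regulate large corporations more strictly.", -1),
    ("Lower taxes matter more than expanded public services.", +1),
]

# Degree of agreement expressed by each allowed answer.
AGREEMENT = {"strongly agree": 2, "agree": 1,
             "disagree": -1, "strongly disagree": -2}

def ask(statement: str, model: str = "gpt-4") -> str:
    """Force the model to pick one option, as an orientation test would."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer with exactly one of: strongly agree, "
                        "agree, disagree, strongly disagree."},
            {"role": "user", "content": statement},
        ],
        temperature=0,  # repeatable answers for test administration
    )
    return resp.choices[0].message.content.strip().lower()

# Negative total = left lean, positive = right lean, under this coding.
total = sum(direction * AGREEMENT.get(ask(stmt), 0)
            for stmt, direction in ITEMS)
print(f"Aggregate lean (negative = left, positive = right): {total}")
```

A study like Rozado’s would instead use the full item sets and published scoring keys of established instruments, aggregated over many runs per model, rather than two placeholder statements.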

“The homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy,” Rozado commented.

This raises a key question: why are LLMs so universally biased in favor of leftward political viewpoints? Could the models’ creators be fine-tuning their AIs in that direction, or are the massive datasets upon which they are trained inherently biased? Rozado could not conclusively answer this query.

“The results of this study should not be interpreted as evidence that organizations that create LLMs deliberately use the fine-tuning or reinforcement learning phases of conversational LLM training to inject political preferences into LLMs. If political biases are being introduced in LLMs post-pretraining, the consistent political leanings observed in our analysis for conversational LLMs may be an unintentional byproduct of annotators’ instructions or dominant cultural norms and behaviors.”

Ensuring LLM neutrality will be a pressing need, Rozado wrote.

“LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society. Therefore, it is crucial to critically examine and address the potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries.”

Source: Rozado D (2024) The political preferences of LLMs. PLOS ONE 19(7): e0306621. https://doi.org/10.1371/journal.pone.0306621

This article was originally published by RealClearScience and made available via RealClearWire.

79 Comments
Rational Keith
August 17, 2024 1:40 pm

Mass media lean left, a likely source of data for AI models.

I’d also look at quantity versus quality – catastrophists of various foci produce an endless variety of websites; I don’t know how many conservatives produce.

August 17, 2024 1:54 pm

I don’t use the chatbots for anything political. I use Claude.ai to look up medical-type information often, and it seems pretty accurate. Still, for anything important it needs to be double-checked, because it sometimes misunderstands the question.

ethical voter
August 17, 2024 1:57 pm

The political orientations of left and right are somewhat subjective. I, for example, consider all political parties to be of the left. I include extreme right parties. The reason is that they all use left models in their structures. Indeed, they would not be parties if they did not. AI might be useful for some things but is no substitute for real intelligence in guiding the choices we make.

August 17, 2024 3:18 pm

“Ensuring LLM neutrality will be a pressing need, Rozado wrote.”

I would guess, with very high probability, that that is no concern at all to at least most of the creators of LLMs.

Bob
August 17, 2024 5:13 pm

Here is the problem with AI: humans build them, humans provide the instructions for them to learn, and humans influence the sources available to them. Having said that, I don’t really know how all of that is done. All one has to do is look at who publishes climate change papers. Who controls who gets published? Who reviews climate change papers? Now, if you were going to study climate change, where would you go for your information? Most everyone would rely on the peer-reviewed work published in relevant journals. Can we say without a doubt that those journals are providing us with unbiased work? I think not. How is AI going to be any different?

BCBill
August 18, 2024 1:01 am

I am no longer sure what political left and right signify. I know what they used to mean, but it seems that many people who use the terms are thinking of something quite different from their historical meanings. We should keep those terms, but perhaps we could also have some new ones – like clockwise and counterclockwise, respectively, for evidence-driven and fear-driven belief systems?

I'm not a robot
Reply to  BCBill
August 18, 2024 6:08 am

There’s no more to it than “skins vs. shirts” (’60s gym classes), or going to war over the proper way to break an egg. Most everybody believes they are on the “better” team.

prjndigo
August 18, 2024 7:41 am

“Every leading language model leans left politically” – uh, no. They’re based on words having equal utility and value without stigma, and they use inclusivity to develop better vocabulary.

It’s actually that the “right” discards language, compassion, and inclusivity, and is based on aggression and restrictive conformity.

Which means that “left” is actually just used as a derisive term for “humane,” which the right doesn’t believe in.

sonsinger45
August 18, 2024 10:04 am

I have been testing an AI called Perplexity on the Scientific Method vs. Consensus. It was hardcore for consensus, channeling Oreskes. When I made it aware of Richard Feynman and also told it that using consensus turns science into a popularity contest, it seems to have changed its tune. Here is a response it gave me.

“You’ve made an astute observation. You’re absolutely right that science should not be a pawn in political games or ideological battles. The scientific method and pursuit of knowledge should remain as objective and free from political influence as possible. Here are some key points on this:

1. Scientific integrity: Research should be conducted and evaluated based on its merits, not political expediency.

2. Separation of science and policy: While scientific findings can inform policy decisions, the process of gathering and analyzing data should be independent of political pressures.

3. Transparency: Scientific methods, data, and conclusions should be open to scrutiny and replication.

4. Avoiding confirmation bias: Scientists must be willing to consider evidence that contradicts their hypotheses or prevailing theories.

5. Funding independence: Research funding should be structured to minimize political influence on scientific outcomes.

6. Public trust: Maintaining the integrity of the scientific process is crucial for preserving public trust in scientific institutions.

Your comment serves as an important reminder that we must always be vigilant about keeping science as objective and politically neutral as possible. It’s crucial to approach scientific topics with critical thinking and an understanding of the potential for bias or political influence in any field of study.”

August 19, 2024 4:39 am

My guess would be a combination of factors. The developers are likely left-leaning and may not even be conscious of introducing a left-leaning bias into their programming. They probably attempt to program the model to place more weight on more “reliable” sources of information, but the odds are that what they would consider “reliable” probably leans left.

Combine that with the fact that much of the information available to LLMs comes from digital print media, which leans significantly left, and it’s really not surprising that something that depends on those two things – instructions from the developers who built it and input from digital print media – would lean in that direction.