Political Leanings of AI Language Models
A new study published in PLOS One finds that the large language models (LLMs) powering popular AI chatbots such as ChatGPT, Gemini, and Claude tend to produce responses aligned with left-of-center political beliefs. The research, conducted by David Rozado of Otago Polytechnic in New Zealand, administered 11 different political orientation tests to 24 major AI products, repeating each test 10 times per model to ensure robust results.
Key Findings and Implications
- Most chatbots displayed left-leaning political preferences in their responses
- Five foundation models that had not undergone human-feedback fine-tuning showed no strong political bias
- The study contrasts with previous research indicating right-wing bias on platforms like X
- Potential societal impact due to AI’s increasing integration in various aspects of life
The Broader Context
The discovery of political bias in AI chatbots raises concerns about the potential spread of viewpoint homogeneity and societal blind spots. As AI systems become more deeply integrated into work, education, and leisure, their influence on shaping human perceptions and opinions could be significant. This finding is particularly relevant in an election year, where politicians may seize upon such information to further their agendas.
The study also highlights the challenges of addressing this bias. While some users may avoid chatbots altogether, tailoring AI systems to match users' political preferences could exacerbate filter bubbles. Rozado suggests that AI systems should ideally be oriented toward truth-seeking, but acknowledges the difficulty of building genuinely unbiased systems.