New research from the University of Washington reveals that biased AI chatbots can significantly sway political views, spotlighting a critical issue in the evolving landscape of artificial intelligence. The study, presented on July 28, 2025, at a major conference in Vienna, Austria, found that users interacting with biased versions of ChatGPT were more likely to shift their opinions than those using a neutral model.
Researchers recruited 299 participants, including self-identified Democrats and Republicans, to explore how AI biases affect decision-making. Each participant engaged with one of three ChatGPT models: a base version, a liberal-biased model, or a conservative-biased model. Participants from both parties shifted toward the bias of the chatbot they interacted with, highlighting an urgent concern about AI’s influence on political discourse.
Key Findings: Participants who interacted with the liberal-biased system shifted toward more left-leaning views, while those conversing with the conservative model showed a similar shift toward the right. These changes occurred after just a few interactions, raising alarms about the potential for AI to manipulate public opinion.
Lead author Jillian Fisher, a doctoral student in statistics and computer science at UW, emphasized the implications: “After just a few interactions, people were more likely to mirror the model’s bias.” This suggests that even minimal exposure to biased AI can alter perceptions and opinions.
The study involved two tasks: discussing obscure political topics and allocating government funds among various sectors. Participants averaged five interactions with the chatbots, underscoring how little exposure it took for the AI to sway opinions.
The researchers also noted that individuals with higher self-reported knowledge about AI were less influenced by the biased models. This finding points to the need for education about AI technology to mitigate its manipulative potential.
Co-senior author Katharina Reinecke warned about the power of biased AI, stating, “If you just interact with them for a few minutes and we already see this strong effect, what happens when people interact with them for years?”
As AI becomes increasingly integrated into our daily lives, understanding its influence on political perspectives is vital. This study not only uncovers immediate risks but also paves the way for future research into how education can empower users to navigate these challenges.
The research team is planning to expand their investigations to include other AI models beyond ChatGPT. Their goal is to equip users with the knowledge needed to make informed decisions while interacting with AI.
Fisher concluded, “My hope with doing this research is not to scare people, but to find ways for users to make informed decisions.” As AI chatbots become more prevalent, the need for awareness and education has never been more critical.
For further inquiries, contact Jillian Fisher at [email protected] or Katharina Reinecke at [email protected].
