A recent study by GroundTruthAI found that popular AI chatbots, including Google’s Gemini 1.0 Pro and OpenAI’s ChatGPT, provided incorrect information about voting and the 2024 election 27% of the time. The study, which posed 216 unique questions to the chatbots, found that even the most advanced model tested, GPT-4o, answered correctly only 81% of the time. These findings raise concerns about the reliability of AI-generated information, particularly on critical topics like voting. As AI becomes increasingly integrated into daily life, the study should serve as a warning to companies and individuals alike to treat AI-generated content with caution.

Election Confusion
Researchers sent 216 unique questions to Google’s Gemini 1.0 Pro and OpenAI’s GPT-3.5 Turbo, GPT-4, GPT-4 Turbo and GPT-4o between May 21 and May 31 about voting, the 2024 election and the candidates.