A recent study by GroundTruthAI found that popular AI chatbots, including Google’s Gemini 1.0 Pro and OpenAI’s ChatGPT, provided incorrect information about voting and the 2024 election 27% of the time. The study, which posed 216 unique questions to the chatbots, found that even the most advanced model tested, GPT-4o, answered correctly only 81% of the time. The results raise concerns about the reliability of AI-generated information on high-stakes topics like voting, and as AI becomes more deeply woven into daily life, they should serve as a warning to companies and individuals alike to treat AI-generated content with caution.

Source.

TOP STORIES

Anthropic's Ongoing Dialogue with Trump Administration Amid Pentagon Tensions
Anthropic continues to engage with the Trump administration despite Pentagon tensions …
Congressional Roundtable Tackles AI's Future and Its Risks
Lawmakers express concerns about AI’s rapid evolution and its risks …
OpenAI Faces Leadership Shakeup as Key Figures Depart
OpenAI is losing key leaders as it shifts focus to enterprise AI and its superapp …
Maine Hits Pause on Large Data Centers Amid AI Expansion Concerns
Maine’s new bill pauses large data center construction to assess environmental impacts …
Man Arrested for Attempted Arson Against OpenAI CEO Sam Altman
Authorities arrested Daniel Moreno-Gama for an attempted arson attack on OpenAI CEO Sam Altman, reportedly motivated by the suspect’s fears about AI …
Anthropic's Mythos Model - A Game-Changer in AI and National Security
Anthropic’s Mythos model raises national security concerns while sparking a lawsuit against the DOD …

LATEST STORIES