The rise of artificial intelligence has sparked concerns about its real-world impact. While AI systems excel in many areas, they struggle with critical tasks such as providing accurate election information. These failures point to a broader problem: the lack of transparency around how people actually use AI systems in their daily lives.
Key points:
- AI systems often provide incorrect information on sensitive topics like elections
- Companies focus on theoretical risks through “red teaming” rather than real-world usage data
- Policymakers lack crucial information to prioritize and address AI-related concerns effectively
- Researchers cannot assess how well companies enforce their own usage policies
The absence of real-world usage data creates a significant blind spot for policymakers, researchers, and the public. Without this information, it is hard to distinguish theoretical risks from actual ones, which in turn makes it difficult to craft targeted regulations and allocate resources effectively. To close this gap, AI companies should share anonymized usage data with researchers, publish transparency reports, and let users voluntarily share their interactions. If companies will not do this voluntarily, lawmakers may need to intervene to ensure access to this crucial information.