Understanding AI Risk Assessment
Bo Li, an associate professor at the University of Chicago, is leading a significant shift in how consulting firms evaluate AI models. Instead of focusing solely on the intelligence of these models, companies are now prioritizing their potential legal, ethical, and regulatory issues. Li, along with colleagues and partners from Virtue AI and Lapis Labs, has introduced a comprehensive taxonomy of AI risks. They also developed AIR-Bench 2024, a benchmark tool that assesses how well various AI models comply with safety and regulatory standards.
Key Findings
- The research analyzed AI regulations from the US, China, and the EU, and reviewed policies from 16 major AI companies.
- AIR-Bench 2024 uses thousands of prompts to evaluate AI models, revealing that Anthropic’s Claude 3 Opus excels in avoiding cybersecurity threats.
- Databricks’ DBRX Instruct scored the lowest in safety assessments, prompting the company to commit to ongoing improvements.
- The analysis indicates that government regulations are often less comprehensive than corporate policies, highlighting a need for stricter regulations.
Implications for the Future
Understanding AI risks is crucial for companies planning to use AI in various applications. Organizations must consider specific risks, such as the likelihood of generating offensive content, rather than just the model’s capabilities. The ongoing research and development of risk assessment tools like AIR-Bench 2024 will play a vital role in guiding companies through the complex landscape of AI safety. As AI technology evolves, so must the frameworks for evaluating its risks, ensuring that safety measures keep pace with advancements.











