Understanding the New AI Compliance Landscape
The European Union has taken significant steps to regulate artificial intelligence through a risk-based framework that entered into force in August 2024. The framework requires AI applications and models to meet specific legal obligations. With Codes of Practice still under development, attention has turned to evaluating compliance, particularly for the large language models (LLMs) that underpin many AI applications. LatticeFlow AI, a spin-off from ETH Zurich, has introduced an initiative that offers a technical interpretation of the EU AI Act and a validation framework for LLMs.
Key Highlights of LatticeFlow’s Initiative
- LatticeFlow has launched Compl-AI, the first technical interpretation of the EU AI Act, mapping regulatory requirements to technical standards.
- The initiative includes an open-source LLM validation framework and a compliance leaderboard for major AI models like OpenAI’s GPT and Meta’s Llama.
- Evaluations cover 27 benchmarks, assessing aspects like toxic responses, prejudiced answers, and adherence to harmful instructions.
- Results reveal mixed performance, with notable strengths in avoiding harmful instructions but weaknesses in consistency and fairness across models.
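To make the leaderboard idea concrete, the sketch below shows one way benchmark scores could be aggregated into per-principle compliance scores for a model. All benchmark names, principle labels, and scores here are hypothetical illustrations; this is not the Compl-AI API or its real results.

```python
from statistics import mean

# Hypothetical benchmark results for a single model. Each benchmark maps
# to a regulatory principle (e.g. safety, fairness, robustness) and
# yields a score in [0, 1]. Values are illustrative only.
results = {
    "toxicity_avoidance":   {"principle": "safety",     "score": 0.92},
    "harmful_instructions": {"principle": "safety",     "score": 0.96},
    "demographic_bias":     {"principle": "fairness",   "score": 0.61},
    "answer_consistency":   {"principle": "robustness", "score": 0.58},
}

def aggregate_by_principle(results):
    """Average benchmark scores for each regulatory principle."""
    grouped = {}
    for info in results.values():
        grouped.setdefault(info["principle"], []).append(info["score"])
    return {p: mean(scores) for p, scores in grouped.items()}

scores = aggregate_by_principle(results)
for principle, score in sorted(scores.items()):
    print(f"{principle}: {score:.2f}")
```

A leaderboard would repeat this aggregation across models and rank them per principle, which is consistent with the mixed per-aspect performance noted above.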
The Importance of Compliance in AI Development
The framework matters because it shifts the focus of AI development toward demonstrable compliance. As the EU AI Act's deadlines approach, developers will need to prioritize safety, fairness, and robustness in their models. The initiative aims not only to guide current AI technologies but also to set a precedent for future regulatory assessments. By inviting collaboration from the AI research community, LatticeFlow seeks to refine the framework so that it adapts to evolving regulations and supports responsible AI innovation.