Overview of the Code of Practice
A draft Code of Practice for providers of general-purpose AI models in the European Union has been published under the new AI Act. It is intended to help companies such as OpenAI, Google, and Meta meet their compliance obligations. Feedback on the draft is open until November 28, and the final version is expected to be more detailed. The Code sets out expectations on transparency, risk assessment, and copyright compliance, with particular focus on models deemed to pose systemic risks.
Key Details of the Draft
- The draft spans 36 pages and is described as a high-level plan.
- Transparency requirements will take effect on August 1, 2025, with stricter rules for models deemed to pose systemic risks following by August 1, 2027.
- Providers must disclose their data sources and designate a single point of contact for rights holders.
- Systemic risks include cybersecurity threats, manipulation, and large-scale discrimination.
Importance of the Code
The draft Code matters because it sets concrete expectations for how powerful AI models should operate responsibly and transparently. It addresses risks of AI deployment that can affect society and democratic processes. By establishing a compliance framework, the EU aims to protect users and mitigate potential harms from advanced AI technologies. The ongoing feedback process will shape the final guidelines so that they reflect the concerns of stakeholders, including industry experts and civil society.