The French Data Protection Authority (CNIL) has published its final guidelines on developing AI systems in a way that reconciles innovation with the protection of personal data. The guidelines, organized into seven “AI how-to sheets,” cover determining the applicable legal regime, defining a purpose, establishing the legal qualification of AI system providers, ensuring the lawfulness of data processing, carrying out a data protection impact assessment, and taking data protection into account in data collection and management.

Noteworthy takeaways include the need for a specific, explicit, and legitimate purpose for processing personal data; the importance of assessing the roles of the parties involved in AI system development; and the need for continuous monitoring and updating of data protection measures. The guidelines also stress the importance of considering ethical issues and human rights in AI system development.

Overall, the guidelines give organizations a comprehensive framework for developing AI systems that comply with the GDPR. In my view, they are a crucial step toward ensuring that AI systems are built with data protection and privacy in mind, and they are likely to have a significant influence on AI development in France and beyond.

CNIL Unveils AI Guidelines
The CNIL states that the successful development of AI systems can be reconciled with the challenges of protecting privacy.