The development and use of Artificial Intelligence (AI) have sparked a global debate on how to regulate the technology so as to maximize its potential while mitigating its risks. As AI becomes increasingly integrated into daily life, governments, academia, civil society, and the private sector are working together to establish guidelines, principles, and laws that ensure the safe, reliable, ethical, and human-centered use of AI.

From the OECD AI Principles to the European Union's proposed AI Act, a range of international documents and initiatives has emerged to address the implications of AI. In Argentina, efforts to discuss and advance AI regulation include the creation of an inter-ministerial committee and the publication of recommendations for carrying out AI-based public innovation projects. The regulatory landscape nevertheless remains fragmented, with scattered laws and regulations offering only partial guidance on how to address AI-related situations.

Regulating AI therefore requires pluralistic debate and collaborative effort to resolve open questions: how to govern the use of AI in specific areas, how to adjust existing regulations, and how to assign responsibilities in the construction of ethical AI.

AI Regulation – A Global Patchwork of Principles and Laws
The first step on this journey is to understand those policies and regulations that directly or indirectly aim to regulate the development and use of AI.