EU’s AI Act Needs Precision to Combat Digital Manipulation
Scholar urges EU policymakers to clarify AI Act provisions to protect consumers.

Advances in artificial intelligence (AI) have enabled digital platforms to subtly manipulate consumer behavior, often without consumers’ awareness. The European Union’s recent Artificial Intelligence Act (AI Act) aims to address these AI-driven “dark patterns,” but scholar Mark Leiser argues that the Act’s current language is too vague to protect consumers effectively. Dark patterns are deceptive design techniques that trick consumers into making purchases, sharing data, or engaging with content in ways that benefit the platform at the consumer’s expense. Although the EU’s Digital Services Act tackles some of these issues, it falls short of addressing the more sophisticated and insidious tactics that AI enables, such as subliminal messaging and algorithmic manipulation. Leiser recommends that the AI Act be redrafted with clearer definitions and a more precise framework grounded in psychological and technological research. He suggests eliminating ambiguous terms and ensuring that the Act covers both immediate and incremental harms caused by manipulative AI systems. Policymakers, he concludes, must be meticulous in their wording to safeguard consumer autonomy against these advanced manipulative practices.