AI’s Double-Edged Sword – Platforms, Profit, and the Human Rights Dilemma

Platforms claim to uphold human rights but are driven largely by commercial interests.

The proliferation of generative AI tools has transformed how content is created and distributed on digital platforms, leaving those platforms struggling to manage a growing volume of low-quality and harmful AI-generated material. Many platforms claim to ground their content moderation in human rights principles, emphasizing freedom of expression and minimal intervention. In practice, however, they are driven primarily by commercial goals, optimizing for user attention and profit rather than ethical considerations. Companies such as Snap and Meta have rushed to integrate AI features, prioritizing commercial gain over user well-being and accelerating the spread of problematic content.

Spam policies illustrate the scope of this power: they grant platforms broad, largely unchecked discretion to curate content according to their commercial interests. Because these policies are vague and flexible, platforms can remove whatever content they deem undesirable, often in ways that diverge from the human rights principles they profess. Heavy reliance on automated moderation tools compounds the problem, embedding technological biases and reflecting inadequate investment in addressing them. Understanding how platforms actually moderate content therefore requires scrutinizing their spam policies and recognizing the commercial motivations behind their decisions.