OpenAI’s recent announcement of Media Manager, slated for release in 2025, has sparked controversy in the creative community. The tool, supposedly designed to give creators control over how their work is used in AI training, fails to address the fundamental issue of intellectual property theft that has underpinned OpenAI’s model development.
The core problem lies in OpenAI’s foundational models, which were built on creative professionals’ work without consent or compensation. Media Manager’s opt-out approach shifts the burden onto creators to protect their work rather than addressing OpenAI’s past transgressions. It is akin to a thief offering victims the chance to opt out of future burglaries.
Key points:
- OpenAI’s foundational models were built using creators’ work without permission
- Media Manager fails to address past IP theft and puts the burden on creators
- Creative professionals have consistently demanded consent and compensation
- Legal actions against AI companies are ongoing, including suits by authors and visual artists
The broader implications of this issue extend beyond individual creators to the creative industry as a whole. With authors’ earnings having declined sharply over the past decade, AI-generated content threatens to further erode their livelihoods, even as AI companies like OpenAI see their valuations soar into the billions.
This situation highlights the urgent need for creative professionals to unite and demand fair compensation and control over their work. The future of creative industries depends on establishing ethical practices in AI development that respect intellectual property rights and fairly compensate creators for their contributions.