TikTok, the popular short-form video platform, recently made headlines when it accidentally released an internal version of its AI digital avatar tool without proper safeguards. The mistake let users create videos of avatars saying just about anything, including harmful and offensive content: CNN, which first spotted the glitch, generated videos featuring quotes from Hitler and a message telling viewers to drink bleach.

The internal tool was meant to be restricted to users with a TikTok Ads Manager account, but the error allowed anyone with a personal account to access it. The public version of the tool, launched earlier in the week, is intended for businesses to generate ads using paid actors and AI-powered dubbing, all within TikTok's guidelines. The internal version, by contrast, lacked the required watermarks disclosing that the videos are AI-generated.

TikTok has since taken down the internal version, attributing the incident to a "technical error" that it says affected only a small number of users. Even so, a mistake of this magnitude underscores the need for stricter quality controls before AI-powered tools are released to the public.
