TikTok, the popular social media platform, recently made headlines after it accidentally released an internal version of its AI digital avatar tool without proper safeguards. The blunder let users create videos of the avatars saying almost anything, including harmful and offensive content. CNN was the first to spot the glitch, generating videos featuring quotes from Hitler and a message telling people to drink bleach.

The internal tool was meant to be restricted to users with a TikTok Ads Manager account, but the mistake allowed anyone with a personal account to access it. The proper version, launched earlier this week, lets businesses generate ads using paid actors and AI-powered dubbing within TikTok’s guidelines. The internal tool, however, lacked the watermarks disclosing that the videos are AI-generated.

TikTok has since taken down the internal version, citing a “technical error” that affected only a small number of users. As a seasoned journalist, I find it concerning that such a significant mistake could occur; it highlights the need for stricter quality control in the development of AI-powered tools.

Source.

TOP STORIES

Unauthorized Users Breach Anthropic's Mythos Cybersecurity Tool
Unauthorized users have gained access to Anthropic’s Mythos, raising security concerns …
Clarifai Deletes 3 Million Photos Amid FTC Investigation Over Data Use
Clarifai has deleted millions of photos from OkCupid amid an FTC investigation into data misuse …
Nvidia's AI Revolution - The Vera Rubin Platform and Future Demand
Nvidia’s Vera Rubin platform is set to revolutionize AI inference with unmatched performance …
Tim Cook's Departure - A Strategic Shift in Apple's AI Landscape
Apple’s leadership transition highlights a strategic focus on silicon for AI innovation …
New Tennessee Law on AI and Mental Health - A Step Forward or Backward?
Tennessee’s new law restricts AI claims in mental health but may create loopholes …

LATEST STORIES