Human Rights Watch's alarming discovery that millions of personal photos of children, including some posted under strict privacy settings, are being used to train artificial intelligence tools without their parents' knowledge or consent is a stark reminder of the darker side of AI development. An analysis of LAION-5B, a popular dataset used by AI developers, revealed identifiable photos of children, including Indigenous Australian and Brazilian kids, with captions that made their identities easy to trace. The report raises serious concerns about privacy, consent, and the potential misuse of these images to create deepfakes or manipulate children's likenesses. That the images were scraped from the internet in ways that bypassed privacy measures is a clear violation of children's rights and underscores the need for stricter regulations and safeguards to protect minors' online presence.
TOP STORIES

Unauthorized Users Breach Anthropic's Mythos Cybersecurity Tool
Unauthorized users have gained access to Anthropic’s Mythos, raising security concerns …
Clarifai Deletes 3 Million Photos Amid FTC Investigation Over Data Use
Clarifai has deleted millions of photos from OkCupid amid an FTC investigation into data misuse …
Nvidia's AI Revolution - The Vera Rubin Platform and Future Demand
Nvidia’s Vera Rubin platform is set to revolutionize AI inference with unmatched performance …
Tim Cook's Departure - A Strategic Shift in Apple's AI Landscape
Apple’s leadership transition highlights a strategic focus on silicon for AI innovation …
New Tennessee Law on AI and Mental Health - A Step Forward or Backward?
Tennessee’s new law restricts AI claims in mental health but may create loopholes …

LATEST STORIES