A recent report by Human Rights Watch reveals a disturbing trend in the development of artificial intelligence (AI): photos of Australian children have been used without consent to train image-generating AI models. Personal information, including photos, of these children was found in LAION-5B, a large data set built by scraping publicly available content from the internet. The data set contains links to some 5.85 billion images paired with captions, and companies use it to "teach" their generative AI tools what visual content looks like. The practice raises serious concerns under data protection and consumer protection laws, as well as questions about privacy breaches. The case highlights the need for stronger enforcement of privacy laws and greater accountability from tech companies.
TOP STORIES

Nvidia's AI Revolution - The Vera Rubin Platform and Future Demand
Nvidia’s Vera Rubin platform is set to revolutionize AI inference with unmatched performance …
Tim Cook's Departure - A Strategic Shift in Apple's AI Landscape
Apple’s leadership transition highlights a strategic focus on silicon for AI innovation …
New Tennessee Law on AI and Mental Health - A Step Forward or Backward?
Tennessee’s new law restricts AI claims in mental health but may create loopholes …
The Evolving Risks of AI - From Chatbots to Cyber Threats
Experts warn that as AI evolves, the risks it poses are becoming more serious and complex …
China's New AI Companion Rules Shape a $30B Market Landscape
China sets new regulations for AI companions, impacting a booming market …
Anthropic's Ongoing Dialogue with Trump Administration Amid Pentagon Tensions
Anthropic continues to engage with the Trump administration despite Pentagon tensions …

LATEST STORIES