Human Rights Watch has made an alarming discovery: millions of personal photos of children, including some posted under strict privacy settings, are being used to train artificial intelligence tools without their parents' knowledge or consent. It is a stark reminder of the darker side of AI development. The organization's analysis of LAION-5B, a dataset widely used by AI developers, uncovered identifiable photos of children, including Indigenous Australian and Brazilian kids, accompanied by captions that made their identities easy to trace. The report raises serious concerns about privacy, consent, and the potential misuse of these images to create deepfakes or manipulate children's likenesses. That these photos were scraped from the internet in ways that bypassed privacy measures is a clear violation of children's rights, and it underscores the need for stricter regulations and safeguards to protect minors' online presence.

AI’s Dark Secret – Kids’ Photos Stolen for Training
Current AI tools create lifelike outputs in seconds, are often free, and are easy to use, risking the proliferation of nonconsensual deepfakes that could recirculate online forever and inflict lasting harm.