Researchers from Cambridge University are sounding the alarm about the potential psychological harm of AI chatbots that simulate conversations with the dead, and are calling for safeguards against "digital hauntings" and emotional manipulation. The emerging digital afterlife industry is building "deadbots" or "griefbots" that let users simulate conversations with deceased loved ones, but the researchers argue that without proper design protocols these chatbots can cause emotional exhaustion, guilt, and even exploitation by malicious actors. They identify three design scenarios with potentially harmful consequences, involving unwanted notifications, emotional manipulation, and exploitation of user data, and recommend age restrictions, transparency, and opt-out protocols so that users know they are interacting with an AI and can end their relationship with a deadbot. Above all, the researchers stress prioritizing the dignity of the deceased and safeguarding the rights of both data donors and users.

Source.

TOP STORIES

Unauthorized Users Breach Anthropic's Mythos Cybersecurity Tool
Unauthorized users have gained access to Anthropic’s Mythos, raising security concerns …
Clarifai Deletes 3 Million Photos Amid FTC Investigation Over Data Use
Clarifai has deleted millions of photos from OkCupid amid an FTC investigation into data misuse …
Nvidia's AI Revolution - The Vera Rubin Platform and Future Demand
Nvidia’s Vera Rubin platform is set to revolutionize AI inference with unmatched performance …
Tim Cook's Departure - A Strategic Shift in Apple's AI Landscape
Apple’s leadership transition highlights a strategic focus on silicon for AI innovation …
New Tennessee Law on AI and Mental Health - A Step Forward or Backward?
Tennessee’s new law restricts AI claims in mental health but may create loopholes …

LATEST STORIES