Researchers at Cambridge University are sounding the alarm on the potential psychological harm of AI chatbots that simulate conversations with the dead, and are calling for safeguards to prevent "digital hauntings" and emotional manipulation. The emerging digital afterlife industry is developing "deadbots" or "griefbots" that let users simulate conversations with deceased loved ones, but the researchers argue that without proper design protocols, these chatbots can cause emotional exhaustion and guilt, and even open the door to exploitation by malicious actors. The researchers identify three design scenarios with potentially harmful consequences: unwanted notifications, emotional manipulation, and exploitation of user data. They recommend age restrictions, transparency measures, and opt-out protocols so that users know they are interacting with an AI and can terminate their relationship with a deadbot. The researchers stress the importance of prioritizing the dignity of the deceased and safeguarding the rights of both data donors and users.

Digital Afterlife Safeguards Needed
Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can digitally "revive" a deceased loved one.