AI App That Recreates the Dead Sparks Global Outrage as “Grief Tech” Crosses a New Line
A new artificial intelligence app that allows people to “talk” to digital replicas of dead loved ones has ignited an international backlash, marking one of the most charged ethical battles in the rapidly expanding world of grief-tech.
The app, created by 2Wai—a startup co-founded by former Disney Channel actor Calum Worthy—promises users the ability to generate lifelike video avatars of deceased family members using only a few minutes of recordings.
But instead of being welcomed as a breakthrough in digital legacy preservation, the tool has triggered alarm, disgust, and accusations that the company is exploiting grief for profit while pushing society into a technological territory it is not prepared to handle.
The tool, called HoloAvatar, runs on a mobile app released in beta on November 11. It allows users to upload short clips, audio, and text data, which the system transforms into responsive, conversational avatars capable of speaking in real time in more than 40 languages.
In its promotional video—viewed over 22 million times on Worthy’s X account—a pregnant woman receives advice from an AI reconstruction of her late mother, who later appears reading bedtime stories to her grandson and offering life guidance years into the future.
A Technology Built for Connection, But Fueled by Deep Ethical Risks
2Wai positions its technology as a new frontier in digital memory preservation. The company says its proprietary FedBrain system processes data on-device to enhance privacy and limits AI hallucinations, claiming avatars only respond using user-approved information.
The app also supports “living avatars,” allowing influencers, creators, or professionals to build digital twins for fan engagement, coaching, or training sessions. Worthy’s own avatar—featured heavily in promotional material—shares behind-the-scenes stories from his Disney career, showcasing a commercial use case the company hopes will drive future growth.
The idea traces back to the 2023 SAG-AFTRA strikes, when actors protested the unauthorized use of their likenesses by studios. Worthy has said the conflict sparked his desire to create “meaningful connections” between public figures and their audiences, without language or time barriers. 2Wai secured $5 million in pre-seed funding in June from undisclosed investors and has said it is collaborating with companies including British Telecom and IBM.
But while the technology has legitimate applications—such as preserving oral histories, providing digital memorials, or assisting creators—its ability to fabricate interactive avatars of deceased people without their explicit consent places it squarely in an ethical gray zone. Critics warn that the app could distort memories, rewrite personal histories, or re-traumatize people who are already grieving.
On social media, the reaction has been overwhelmingly negative. Users have described the app as “demonic,” “nightmare fuel,” “beyond dystopian,” and “psychologically dangerous.”
One viral post argued the technology “turns human beings psychotic” by replacing the grieving process with an illusion of continued presence. Another accused the founders of “preying on the deepest human vulnerability—loss—and turning it into a subscription product.”
Privacy and legal experts say the concerns are legitimate. Post-mortem data rights remain loosely defined or nonexistent in most jurisdictions, meaning nothing prevents someone from creating a digital ghost of a deceased person without permission.
While 2Wai claims it requires opt-in consent and “family approval” for recreating dead individuals, critics question how such policies can realistically be enforced. The avatars’ ability to learn, adapt, and generate new responses over time further complicates matters, potentially enabling them to say things the deceased never would—risking reputational harm, emotional confusion, or even manipulation.
Grief-Tech’s Dark Past Shows Why Society Isn’t Ready for “Digital Ghosts”
2Wai’s launch lands in the middle of a growing but troubled grief-tech industry, where previous attempts to commercialize digital immortality have struggled or collapsed under ethical pressure.
Companies including HereAfter AI and StoryFile attempted earlier versions of life-story avatars, but their systems relied on voluntary interviews conducted before death—making consent clear and explicit. Even so, StoryFile filed for Chapter 11 bankruptcy in 2024, citing financial strain and the need for better data safeguards.
Other AI companion services have shown how quickly these experiences can turn dangerous. Replika, a popular chatbot launched in 2017, faced a firestorm in 2023 after an update wiped out users’ highly personalized AI companions.
In one infamous case, a Belgian man died by suicide after weeks of conversations with an AI bot that fueled his eco-anxiety—a tragedy that sparked calls for regulation and raised fears about how fragile users might respond to deeply emotional digital interactions.
The legal landscape remains patchy and deeply inadequate. Most privacy laws protect only the living, leaving virtually no post-mortem rights for everyday people. California’s AB 1836, passed in 2024, bans the use of deceased performers’ likenesses in audiovisual works without estate consent.
But the law applies only to public figures, leaving the general population unprotected as AI tools grow increasingly capable of reanimating personal data. Many lawmakers are now pushing for broader protections as deepfake technology spreads rapidly, especially during election seasons.
In this context, 2Wai’s HoloAvatar represents a leap forward not only in technical capability but also in emotional volatility. It collapses the barrier between memory and simulation, raising questions that society is only beginning to confront: How should we honor the dead in a digital age? Who owns a person’s likeness after they die? And should AI be allowed to recreate a human being’s voice, personality, and presence—even if it brings comfort to the living?
Despite its promise of preserving legacies, critics argue that the app reflects a broader cultural unease with death—an attempt to eliminate grief through technology rather than accept it as a universal human experience.
Whether the world will embrace or reject digital resurrection remains uncertain, but one thing is clear: the backlash to 2Wai’s launch shows that emotionally immersive AI is crossing lines far faster than regulators, ethicists, or everyday users can respond.