Grief tech: redefining death in the age of AI

Richard van Hooijdonk
AI-powered 'grief tech' challenges the finality of death, but at what cost? Explore the ethical dilemmas of digital immortality and its impact on the living.
  • Executive summary
  • Coping with loss isn’t easy
  • Welcome to the digital ‘afterlife’
  • The rise of grief tech start-ups
  • A soothing balm or a threat to mental health?
  • Is this really a good idea?
  • Learnings

Executive summary

Few experiences are as profoundly impactful as losing a loved one. The void left behind can seem insurmountable, leaving us yearning for just a little bit more time with those we miss. This deeply human desire is fueling the rise of grief tech, which promises to forever change how we preserve the memory of those who are no longer alive.

  • There is a growing number of grief tech start-ups that allow people to have virtual conversations with AI chatbots of their deceased loved ones.
  • While some believe grief tech could help people find closure, there are also concerns that it could have a negative impact on their mental health.
  • “There is evidence from multiple studies that proximity seeking [behaviours aimed at restoring a closeness with the person who died] is actually linked with poorer mental health outcomes,” explains Dr Kirsten Smith, clinical research fellow at the University of Oxford.
  • “Who gets to decide what ‘helping people grieve’ means?” asks psychotherapist and grief consultant Megan Devine.
  • “People have always engaged in various kinds of rituals where things are done with the belongings of the deceased, where certain kinds of possessions are preserved,” remarks Dr Elaine Kasket, bereavement lead at the Digital Legacy Association. “This could be seen as a technological version of these analogue, physical rituals.”

One thing is certain: as AI continues to evolve, so too will our approach to death and remembrance. The challenge lies in harnessing this technology thoughtfully, ensuring that we do not cause further harm in our efforts to help. Above all, we must not lose sight of our humanity in our pursuit of digital immortality.

Coping with loss isn’t easy

The loss of a loved one is one of the most painful experiences a person has to go through over the course of their life. Despite our best efforts, death remains an inextricable part of human existence—it’s what awaits us all at the end of the journey. Knowing it is inevitable, however, doesn’t necessarily make loss any easier to accept. A loved one’s absence is felt even more profoundly when the loss is sudden, robbing the bereaved of the opportunity to say goodbye and find some sense of closure.

There is no universal formula for coping with grief; every person processes their loss differently. Some people will try to preserve the memory of their dearly departed by putting up photos on the wall, reading old letters or text messages, or listening to old voicemails. Others may occasionally visit their graves and speak to them as if they were actually there. Until now, this relationship between the living and the dead has been a one-way street.

The rise of artificial intelligence (AI) has created new avenues for closure. Indeed, what if the dead could talk back? This article will explore the rise of ‘grief tech’ and its implications—both good and bad—for how we process loss.

Welcome to the digital ‘afterlife’

Grief tech promises to keep people connected with their loved ones even after their deaths, but not without raising some serious ethical concerns.

Despite what your local medium may have you believe, the idea of communicating with the dead has long been confined to the realm of science fiction and fantasy. However, recent technological advances in AI have brought it one step closer to reality—in a manner of speaking. Of course, we are not talking about actually bringing people back from the dead, as that would be impossible. Rather, we are talking about creating digital replicas of the deceased to help the bereaved process their grief and accept their loss.

This emerging concept is increasingly being referred to as grief tech, and it involves using photos, text messages, audio and video recordings left behind by the departed to create a virtual clone that talks, sounds, and looks just like them. Grief tech has been made possible by the rise of generative AI, which has quickly proven itself capable of producing text near-impossible to distinguish from human-generated content. It can also accurately mimic a person’s voice based on just a few seconds of audio, and create surprisingly realistic video content.

It isn’t hard to see what the appeal of grief tech might be. It could provide people with another chance to do things they never got around to while the person was still alive: say their final goodbyes, tell them they love them, or bury old grievances. By enabling people to stay ‘connected’ with their loved ones even after they have passed away, grief tech could help them keep their memory alive and provide some comfort as they come to terms with the loss.

At the same time, the use of generative AI to create virtual representations of people who are no longer among us raises a host of ethical concerns. Is this really a healthy way to process grief? Could prolonged interactions with AI facsimiles of the dead actually hinder the healing process and prevent people from accepting reality and moving on with their lives? There is also the matter of consent. Do we even have the right to digitally ‘resurrect’ someone who hasn’t consented to it? And who will be responsible if the AI replica says something out of character that could tarnish the dead person’s memory?

The rise of grief tech start-ups

Various AI platforms now enable people to interact with digital avatars of their departed loved ones, keeping their memory vividly alive.

There is a growing number of companies that use generative AI to create digital replicas of the deceased in either text, audio, or video form. One of the most notable examples is Seance AI, which allows users to engage in virtual conversations with chatbots that mimic the personalities and writing styles of their deceased loved ones. The user first needs to provide some key information about the deceased, including their name, age, relationship to the user, personality traits, and cause of death. They are also asked to include a portion of text from the deceased to give the AI a better idea of what they sounded like, which allows for a more accurate replication. Once the user inputs the required information, they are presented with a text box and can proceed to initiate a conversation.
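To make the process concrete, here is a minimal sketch of how a service like Seance AI might assemble the user's inputs into a single persona prompt for a generative model. The function name, fields, and prompt wording are illustrative assumptions, not the company's actual implementation.

```python
# Hypothetical sketch: combining user-supplied details about the deceased
# into one persona prompt for a generative language model. All names and
# fields here are assumptions for illustration only.

def build_persona_prompt(name, age, relationship, traits, writing_sample):
    """Fold the user's inputs into a single instruction that asks the
    model to imitate the deceased person's personality and style."""
    trait_list = ", ".join(traits)
    return (
        f"You are {name}, aged {age}, the user's {relationship}. "
        f"Personality traits: {trait_list}. "
        f"Match the tone and style of this writing sample:\n{writing_sample}"
    )

prompt = build_persona_prompt(
    name="Ada",
    age=72,
    relationship="grandmother",
    traits=["warm", "witty"],
    writing_sample="Dearest, the garden is in full bloom again...",
)
```

In a real system, a prompt like this would be sent to a language model alongside each of the user's messages, so every reply is shaped by the supplied personality traits and writing sample.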

Other companies have introduced the element of speech. Unlike Seance AI, HereafterAI doesn’t rely on fragments left behind by the deceased. Instead, it enables the individual whose personality will be captured in the chatbot to tell the story of their life in their own voice by participating in a series of interviews while they are still alive. These recordings are then used to create a ‘life story avatar’ of the person, which their family and friends will be able to interact with once they are gone. Whenever someone asks the avatar a question, the platform uses AI to scour recordings made by the deceased and find the most appropriate answer. While this limits the range of questions the avatar can respond to, it ensures that the answer is something the person actually said—not some AI fabrication.
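The retrieval step described above—matching a question to the most appropriate recording rather than generating new text—can be sketched as follows. This is a deliberately simple illustration using word overlap; a production system would almost certainly use semantic embeddings instead, and the example data is invented.

```python
# Minimal sketch of retrieval-style answering: the avatar only replays
# recordings, picking the transcript that best matches the question.
# Plain word overlap stands in for real semantic matching.
import string

def _words(text):
    """Lowercase a string and strip punctuation, returning a set of words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def best_recording(question, transcripts):
    """Return the transcript sharing the most words with the question."""
    q_words = _words(question)
    return max(transcripts, key=lambda t: len(q_words & _words(t)))

recordings = [
    "I grew up on a farm in Friesland with my three brothers.",
    "My first job was repairing bicycles after the war.",
    "Your grandmother and I met at a village dance in 1954.",
]
answer = best_recording("Where did you grow up?", recordings)
```

The key property the article highlights survives even in this toy version: whatever comes back is always something the person actually recorded, never newly generated text.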

Similar to HereafterAI, StoryFile enables people to preserve their memories for future generations by recording themselves answering questions on a wide variety of topics. The main difference between the two is that StoryFile captures both video and audio. The recordings can be made either at home using any camera-enabled device, or in the company’s studio to ensure the highest possible quality. Once the recordings are uploaded to the company’s platform, StoryFile uses them to provide the person’s loved ones with an interactive video experience that is similar to a natural face-to-face conversation, allowing them to relive some of their most cherished memories or even learn some new things about their deceased loved ones.


A soothing balm or a threat to mental health?

While some experts concede that grief tech may provide comfort to some people, there is also a risk that it could have a negative impact on their mental health.

Experts largely agree that the biggest concern with this type of technology is that it could do more harm than good, and negatively impact the bereaved person’s mental health. “There is evidence from multiple studies that proximity seeking [behaviours aimed at restoring a closeness with the person who died] is actually linked with poorer mental health outcomes,” explains Dr Kirsten Smith, clinical research fellow at the University of Oxford. “Proximity-seeking behaviours may block someone from forging a new identity without the deceased person or prevent them from making new meaningful relationships. It might also be a way of avoiding the reality that the person has died—a key factor in adapting to the loss.”

Many also highlight their concerns about consent and how best to approach the rights of dead people. “I think there are dignitary rights even after somebody passes away, so it applies to their voices and their images as well. I also feel like this kind of treats the loved ones as kind of a means to an end,” argues Irina Raicu, the Internet Ethics Program director at Santa Clara University. “I think aside from the fact that a lot of people would just be uncomfortable with having their images and videos of themselves used in this way, there’s the potential for chatbots to completely misrepresent what people would’ve said for themselves.”

There are also some who believe there is nothing inherently wrong with grief tech. “Who gets to decide what ‘helping people grieve’ means?” asks psychotherapist and grief consultant Megan Devine. “I think we need to look at the outcome in the use of any tool. Does that AI image soothe you, make you feel still connected to her, bring you comfort in some way? Then it’s a good use case. Does it make you feel too sad to leave the house, or make you lean too heavily on substances or addictive behaviours? Then it’s not a good use case.”

Some experts argue that there isn’t much of a difference between having a one-way conversation at somebody’s grave and interacting with an AI representation of them. “People have always engaged in various kinds of rituals where things are done with the belongings of the deceased, where certain kinds of possessions are preserved,” remarks Dr Elaine Kasket, bereavement lead at the Digital Legacy Association. “This could be seen as a technological version of these analogue, physical rituals.” Her thoughts are echoed by Stefanie Schillmöller, an expert in trends surrounding death. “There’s a theory called ‘continuing bonds’, which says that it’s quite normal to have a kind of relationship with the deceased, to remember him or her, to have an inner dialogue. We may lose the person, but we don’t lose that relationship,” she adds.

Is this really a good idea?

The rise of grief tech raises serious concerns about potential abuse and emotional exploitation in our most vulnerable moments.

You don’t need a particularly vivid imagination to anticipate where things might go wrong. As painful as the death of a loved one may be, the experience plays an important role in our emotional development. We might all agree that it would be wonderful if our loved ones could live forever. They can’t, though, and pretending otherwise is a recipe for disaster. While there is a case to be made that grief tech could be beneficial in certain scenarios, the whole concept needs to be approached with the utmost care and caution—preferably under the supervision of a qualified professional.

There is a genuine risk that a grieving person could become emotionally dependent on the AI chatbot, preventing them from pursuing new, healthy relationships with other humans and diminishing their ability to distinguish between fantasy and reality. This naturally raises the question: what happens if the person can no longer afford to pay for the service, or the company decides to shut it down? How will that impact their emotional state? The risks are even greater if children are involved. Could the use of this technology hinder their emotional development? And what happens if it is suddenly taken away?

Another concerning possibility is that some people may use the technology to resurrect us digitally after we are gone, whether we like it or not. Is there anything we can do to prevent this—a digital do-not-reanimate (DDNR) order of sorts? What’s to stop someone from using grief tech to virtually recreate a living person—their ex, for instance? Do we have guardrails in place to prevent malicious actors from exploiting this data to cause further pain for the bereaved? And what about the nightmarish possibility that a grief tech company sneaks in advertising under the guise of a loved one’s thoughts?

Learnings

Throughout history, humans have come up with countless ways to remember and honour the dead. From storytelling around campfires to photos on mantlepieces, we’ve always sought to keep the memory of our loved ones alive. Maybe—just maybe—grief tech could actually be the next step in this very human journey. But if it is, we will have to take that step with extreme caution.

  • Grief tech could potentially help people better cope with their grief and provide them with a sense of closure.
  • It could also enable future generations to get to know the ancestors they never got to meet in real life.
  • However, some experts are concerned that grief tech could actually hinder the healing process and prevent people from moving on with their lives.
  • There is also the issue of consent, with very little to protect people from being digitally resurrected against their will.

In the end, perhaps the most important question isn’t so much whether we can create these digital replicas of the dead, but whether it is right and decent to do so. Can we truly strike a balance between helping the living and not disrespecting the dead? Despite its uncertain merits, it appears that grief tech is here to stay.
