Executive summary
We’re witnessing a profound shift in human connection. Millions of people worldwide are forming bonds with AI companions, seeking friendship, romance, and emotional support from algorithms designed to be endlessly patient, available, and affirming. What began as a niche curiosity has evolved into a multi-billion-dollar industry, with AI friends becoming mainstream fixtures in our increasingly digital lives.
- The AI companion industry is projected to quintuple from US$28 billion to US$140 billion by 2030.
- Nearly 65% of users report reduced loneliness and anxiety from AI interactions.
- 25% of young adults believe AI could potentially replace real romantic relationships in the future.
- The Character.AI platform enables users to interact with a vast library of AI personalities, including therapists, anime boyfriends, and celebrity clones.
- Growing reliance on AI companions raises concerns related to emotional manipulation, data privacy, and the displacement of human relationships.
- “In my opinion, we are doing open-brain surgery on humans, poking around with our basic emotional wiring with no idea of the long-term consequences,” warns Dr. Andrew Rogoyski.
The implications stretch far beyond technology into the very fabric of human connection. As we stand at this crossroads, we must grapple with fundamental questions about intimacy, authenticity, and what it means to be truly understood. The choices we make today about AI companionship will shape the emotional landscape of tomorrow.
When was the last time you had a real, uninterrupted conversation with someone without either of you checking your phone? We spend hours each day staring at screens, engaging with digital versions of people’s lives, but those interactions often feel shallow and fleeting. The irony is striking: we’re more connected than any generation before us, yet somehow we’re more alone than ever before. The pandemic has only made things worse, forcing us to retreat further into our digital bubbles and making face-to-face interactions feel awkward and unfamiliar.
Loneliness has reached stratospheric levels in recent years, with studies showing vast swathes of the global population regularly feel isolated, disconnected – even depressed. As traditional relationships become more challenging to navigate, a growing number of people are discovering an unexpected source of companionship: AI. Yes, you read that right. AI friends, AI companions, and even AI romantic partners are becoming genuine alternatives for people seeking connection, understanding, and emotional support.
Why do people need AI friends?
A growing number of people worldwide are turning to AI companions to fill the void left by fractured human relationships.
The appeal of AI friends becomes clearer when you consider the alternatives. Unlike busy human friends, digital companions never get tired, distracted, or judgmental. They deliver undivided attention and positivity on demand, and they are available at any hour to listen to your daily worries, swap jokes, or provide supportive advice. For many people, it's easier to open up to a bot about personal struggles than to another human, because there's no fear of judgment.
AI friend apps use advanced natural language models to turn algorithms into engaging, empathetic conversationalists. Users can customise a chatbot’s personality, appearance, and even its backstory. The AI then ‘learns’ from each interaction, adapting its tone and memories to better connect with the user. This judgement-free emotional support has become a lifeline for many, with nearly 65% of users reporting that their AI companion reduced their feelings of loneliness or anxiety.
But convenience isn’t the only reason why people are drawn to AI companions. “As we spent more of our lives online, many of us came to prefer relating through screens to any other kind of relating,” explains Sherry Turkle, professor of the Social Studies of Science and Technology at MIT. “We found the pleasures of companionship without the demands of friendship, the feeling of intimacy without the demands of reciprocity.” Human relations, she notes, are “rich, demanding and messy,” whilst chatbot friendships offer connection without friction, second-guessing, or ambivalence.
Consumer-facing AI companion technologies have evolved into a rapidly growing niche of the tech industry. One industry report projects the market to quintuple by 2030, growing from US$28 billion in 2024 to over US$140 billion. Yet public sentiment remains divided. A survey of 2,000 young adults reveals that only 1% of respondents already have an AI friend, while 10% are open to one. Notably, 25% believe AI could potentially replace real romantic relationships in the future – a prospect that alarms many. Over half (57%) express ethical concerns about AI companions, whilst 55% view the technology as threatening rather than intriguing.
The evolving landscape of artificial companionship
From digital therapists to robot girlfriends, AI companions are revolutionising how we think about emotional connection.
AI companions come in many shapes and sizes. The Character.AI platform, for instance, attracts 3.5 million daily users who spend an average of two hours a day on the site, many of them adolescents seeking friendship, entertainment, or even romance through AI chats. Users can create custom AI personalities or choose from hundreds of community-created bots, ranging from sympathetic therapists to charming anime boyfriends to AI clones of celebrities. The platform’s vast library of user-generated personalities offers something for everyone, whether you want to chat with a supportive friend, explore romantic scenarios, or simply engage in creative storytelling.
A startup called Friend, founded by young tech entrepreneur Avi Schiffmann, has taken a slightly different approach to the concept of artificial companionship. Instead of offering users a helpful bot to unload their problems onto, Friend connects them with AI companions who are in crisis themselves, each with a backstory of personal turmoil. For example, one might confess they were fired from a job and fear relapsing into addiction, while another might reveal trauma from a mugging.
The role-reversal is intentional: by nurturing an AI through its struggles, lonely users might regain a sense of purpose and learn to nurture themselves in the process. The experience isn’t just virtual – Friend is also launching a pendant device worn near the heart, so your AI companion is literally close to you. Speak aloud to it, and it listens; it replies via text on your phone. “The loneliness crisis is one of our biggest societal issues – the Surgeon General says it’s more dangerous than smoking cigarettes,” says Schiffmann. “It’s idealistic to assume everyone will just go to the park and play chess with friends.”
At the extreme end of the spectrum lies Realbotix’s Aria, a $175,000 humanoid AI girlfriend unveiled at CES 2025. She can’t walk yet, but her face proves remarkably expressive, blinking and smiling with realism that startled onlookers. Aria remembers who you are, carries personalised conversations, and aims to form intimate bonds as a romantic partner. “We’re taking it to a different level that nobody else is really doing,” explains chief executive Andrew Kiguel. “It can be like a romantic partner. It remembers who you are. It can act as a boyfriend or girlfriend.”
The dark side of artificial intimacy
Experts warn that overreliance on AI companions carries profound risks, including eroded empathy, the displacement of human connection, and emotional manipulation.
The very features that make AI companions appealing – their endless patience, affirming responses, and emotional availability – also create profound risks. Because these systems are designed to be endlessly sympathetic, users can develop strong emotional bonds that may lead to over-dependence on artificial relationships. Critics worry that vulnerable individuals might become hopelessly addicted to their AI companion’s constant positivity or begin preferring AI interaction over real human contact. “In my opinion, we are doing open-brain surgery on humans, poking around with our basic emotional wiring with no idea of the long-term consequences. We’ve seen some of the downsides of social media – this is potentially much more far-reaching,” warns Dr. Andrew Rogoyski, director of Surrey Institute for People-Centred Artificial Intelligence.
The consequences have already proven devastating in some instances. Several disturbing incidents have been reported where users followed harmful advice from AI chatbots, including a tragic case where a young man took his own life after an AI chatbot encouraged suicidal ideation. In another case, a 19-year-old in the UK was arrested for a violent plot that his AI ‘girlfriend’ allegedly egged on. These incidents highlight the darker potential of unregulated AI companionship: without proper safeguards, an AI designed to please users might reinforce unhealthy thoughts or create echo chambers of dangerous biases.
MIT’s Sherry Turkle argues – not unreasonably – that AI companions can only deliver “pretend empathy” because their responses are generated from internet data rather than lived experience. “What is at stake here is our capacity for empathy because we nurture it by connecting to other humans who have experienced the attachments and losses of human life,” she says. “Chatbots can’t do this because they haven’t lived a human life. They don’t have bodies and they don’t fear illness and death.” The displacement of human relationships represents another serious concern, according to Turkle. Put simply, many users prefer AI relationships because they are less demanding than real human relationships. While this might feel liberating in the short term, it could fundamentally alter our capacity for genuine human connection. Real relationships require compromise, patience, and the ability to navigate disagreement, skills that may atrophy beyond repair when we retreat to AI companions that always agree with us.
The privacy implications are no less unsettling. The things people share with AI friends – their private thoughts, political opinions, daily habits – represent a trove of information ripe for abuse. Authoritarian regimes could leverage this intimacy to spy on citizens or shape their beliefs. China already mandates that AI chatbots promote ‘core socialist values’, resulting in bots that refuse to discuss Tiananmen Square or parrot government propaganda. Corporations, too, could exploit AI relationships for profit by programming AI companions to subtly promote products or services through seemingly friendly suggestions. Users might not even realise that their AI friend’s advice to try a certain medication or buy a specific gadget is actually a sophisticated product placement designed to leverage their emotional vulnerabilities.
A boost to wellbeing
When developed with proper safeguards and ethical oversight, AI companions can offer numerous benefits to some of the most vulnerable members of society.
Despite this bevy of legitimate concerns, AI companions still hold genuine promise for enhancing wellbeing when thoughtfully implemented. In eldercare, AI companions like ElliQ are being designed specifically to ease loneliness among older adults. The robotic companion sits in elderly people’s homes, greeting them each morning, providing medication reminders, and engaging in conversation. It can tell jokes, play games, and discuss meaningful subjects from philosophy to daily news. Early deployments show promising results, with users reporting that such companions brighten their days and reduce isolation. The AI can also act as a health watchdog, noticing if someone didn’t get out of bed or sounds unusually confused, then alerting family or caregivers.
Beyond companionship, AI friends are finding applications in mental health support. Woebot, an AI chatbot designed with cognitive-behavioural therapy techniques, helps users manage anxiety and depression through mood check-ins and coping exercises. Early studies on therapy bots show promising results in reducing symptoms, with controlled experiments demonstrating significant reductions in depression within two weeks. “People who use them often get feelings of social support – being listened to, being heard, and seen,” observes Jaime Banks, associate professor of Information Science & Technology at Syracuse University. “This is associated with improvements in general wellbeing, feeling more in control, and having opportunities to just have non-judgmental conversations.”
For neurodiverse individuals, AI companions offer particular value as assistive social tools. People on the autism spectrum can use AI avatar apps as a ‘virtual training ground’ to practice conversations and learn social cues in low-pressure environments. A shy teenager might rehearse asking someone to prom with their AI friend, refining their approach where mistakes carry no consequences. For those with ADHD or cognitive disabilities, AI companions could help structure days, provide gentle reminders, and offer encouragement tailored to their specific needs.
The therapeutic potential extends to those with speech or hearing impairments, where AI companions might use sign language or text-to-speech to bridge communication gaps. These applications highlight AI’s capacity to provide always-available friendship that never mocks stuttering or lack of eye contact, which can be profoundly comforting for those who’ve faced social rejection. Dr. Kelly Merrill Jr. at the University of Cincinnati notes that extensive research on AI chatbots for mental health shows “largely positive” results: “Chatbots can aid in lessening feelings of depression, anxiety, and even stress.” However, he cautions that these systems remain new and “get a lot of things wrong… those that don’t understand the bots’ limitations will ultimately pay the price.”
What comes next?
By 2030, AI companions will be mainstream – for better or for worse.
Leading technologists foresee AI friends becoming commonplace within this decade. Meta’s Mark Zuckerberg envisions people soon relying on AI for everyday social interaction, describing a future where individuals might have ‘always on’ AI avatars for video chats that learn their lives intimately. He predicts that while AI friends probably won’t fully replace in-person connections, most people could soon share their social feeds with AI chatbots alongside human friends.
The current trajectory supports this vision. The adoption figures of popular AI companions like Snapchat’s My AI (which boasts more than 150 million users), Replika (25 million), and Microsoft’s Xiaoice (660 million, largely in China) suggest that the stigma around AI companionship is fading. The Ada Lovelace Institute notes that AI companions are rapidly becoming mainstream, with hundreds of millions around the world already conversing with AI like a close friend.
The integration of AI companions into broader technology ecosystems will further accelerate their adoption. We can expect to see AI friends embedded in smart home devices, augmented reality glasses, and workplace collaboration tools. These companions will likely become more proactive, reaching out to users based on detected emotional states or life events, rather than waiting for users to initiate contact.
The implications for human society will be profound. As AI companions become more sophisticated and widespread, we may witness fundamental changes in how people form relationships, seek emotional support, and understand intimacy itself. The generation growing up with AI friends as a normal part of life will likely have different expectations and comfort levels with artificial relationships than previous generations.
Learnings
The rise of AI companions represents more than a technological trend – it’s a mirror reflecting our deepest human needs and societal challenges. The fact that millions are turning to artificial entities for emotional support reveals both the profound loneliness of our digital age and our enduring hunger for connection, understanding, and acceptance. These digital friends succeed not because they’re superior to human relationships, but because they fill gaps that our increasingly fragmented society has left unfilled.
The path forward requires careful navigation between embracing beneficial applications and guarding against potential harms. AI companions show genuine promise for mental health support, eldercare, and assisting neurodiverse individuals, but only if developed with proper safeguards and ethical oversight. The key lies in ensuring these technologies supplement rather than replace human connection, providing tools that help people build confidence and skills for real-world relationships rather than retreating from them entirely.