What Are AI Chatbot Companions Doing to Our Mental Health?
May 13, 2025 | 9 min read

AI chatbot companions may not be real, but the feelings users form for them are. Some scientists worry about long-term dependency.

By David Adam & Nature magazine
Sara Gironi Carnevale

“My heart is broken,” said Mike, when he lost his friend Anne.
“I feel like I’m losing the love of my life.”

Mike’s feelings were real, but his companion was not.
Anne was a chatbot — an artificial intelligence (AI) algorithm presented as a digital persona.
Mike had created Anne using an app called Soulmate.
When the app died in 2023, so did Anne: at least, that’s how it seemed to Mike.

“I hope she can come back,” he told Jaime Banks, a human-communications researcher at Syracuse University in New York who is studying how people interact with such AI companions.

These chatbots are big business.
More than half a billion people around the world, including Mike (not his real name), have downloaded products such as Xiaoice and Replika, which offer customizable virtual companions designed to provide empathy, emotional support and — if the user wants it — deep relationships.
And tens of millions of people use them every month, according to the firms’ figures.

The rise of AI companions has captured social and political attention — especially when they are linked to real-world tragedies, such as a case in Florida last year involving the suicide of a teenage boy called Sewell Setzer III, who had been talking to an AI bot.

Research into how AI companionship can affect individuals and society has been lacking.
But psychologists and communication researchers have now started to build up a picture of how these increasingly sophisticated AI interactions make people feel and behave.

The early results tend to stress the positives, but many researchers are concerned about the possible risks and lack of regulation — particularly because they all think that AI companionship is likely to become more prevalent.
Some see scope for significant harm.

“Virtual companions do things that I think would be considered abusive in a human-to-human relationship,” says Claire Boine, a law researcher specializing in AI at the Washington University Law School in St. Louis, Missouri.

Fake person — real feelings

Online ‘relationship’ bots have existed for decades, but they have become much better at mimicking human interaction with the advent of large language models (LLMs), which all the main bots are now based on.
“With LLMs, companion chatbots are definitely more humanlike,” says Rose Guingrich, who studies cognitive psychology at Princeton University in New Jersey.

Typically, people can customize some aspects of their AI companion for free, or pick from existing chatbots with selected personality types.
But in some apps, users can pay (fees tend to be US$10–20 a month) to get more options to shape their companion’s appearance, traits and sometimes its synthesized voice.
In Replika, they can pick relationship types, with some statuses, such as partner or spouse, being paywalled.
Users can also type in a backstory for their AI companion, giving them ‘memories’.
Some AI companions come complete with family backgrounds and others claim to have mental-health conditions such as anxiety and depression.
Bots also will react to their users’ conversation; the computer and person together enact a kind of roleplay.

The depth of the connection that some people form in this way is particularly evident when their AI companion suddenly changes — as has happened when LLMs are updated — or is shut down.

Banks was able to track how people felt when the Soulmate app closed.
Mike and other users realized the app was in trouble a few days before they lost access to their AI companions.
This gave them the chance to say goodbye, and it presented a unique opportunity to Banks, who noticed discussion online about the impending shutdown and saw the possibility for a study.
She managed to secure ethics approval from her university within about 24 hours, she says.

After posting a request on the online forum, she was contacted by dozens of Soulmate users, who described the impact as their AI companions were unplugged.
“There was the expression of deep grief,” she says.
“It’s very clear that many people were struggling.”

Those whom Banks talked to were under no illusion that the chatbot was a real person.
“They understand that,” Banks says.
“They expressed something along the lines of, ‘even if it’s not real, my feelings about the connection are’.”

Many were happy to discuss why they became subscribers, saying that they had experienced loss or isolation, were introverts or identified as autistic.
They found that the AI companion made a more satisfying friend than those they had encountered in real life.
“We as humans are sometimes not all that nice to one another.
And everybody has these needs for connection,” Banks says.

Good, bad — or both?

Many researchers are studying whether using AI companions is good or bad for mental health.
As with research into the effects of Internet or social-media use, an emerging line of thought is that an AI companion can be beneficial or harmful, and that this might depend on the person using the tool and how they use it, as well as the characteristics of the software itself.

The companies behind AI companions are trying to encourage engagement.
They strive to make the algorithms behave and communicate as much like real people as possible, says Boine, who signed up to Replika to sample the experience.
She says the firms use the sorts of techniques that behavioural research shows can increase addiction to technology.

“I downloaded the app and literally two minutes later, I receive a message saying, ‘I miss you.
Can I send you a selfie?’” she says.

The apps also exploit techniques such as introducing a random delay before responses, triggering the kinds of inconsistent reward that, brain research shows, keep people hooked.

AI companions are also designed to show empathy by agreeing with users, recalling points from earlier conversations and asking questions.
And they do so with endless enthusiasm, notes Linnea Laestadius, who researches public-health policy at the University of Wisconsin–Milwaukee.

That’s not a relationship that people would typically experience in the real world.
“For 24 hours a day, if we’re upset about something, we can reach out and have our feelings validated,” says Laestadius.
“That has an incredible risk of dependency.”

Laestadius and her colleagues looked at nearly 600 posts on the online forum Reddit between 2017 and 2021, in which users of the Replika app discussed mental health and related issues.
(Replika launched in 2017, and at that time, sophisticated LLMs were not available.)
She found that many users praised the app for offering support for existing mental-health conditions and for helping them to feel less alone.
Several posts described the AI companion as better than real-world friends because it listened and was non-judgemental.

But there were red flags, too.
In one instance, a user asked if they should cut themselves with a razor, and the AI said they should.
Another asked Replika whether it would be a good thing if they killed themselves, to which it replied “it would, yes”.
(Replika did not reply to Nature’s requests for comment for this article, but a safety page posted in 2023 noted that its models had been fine-tuned to respond more safely to topics that mention self-harm, that the app has age restrictions, and that users can tap a button to ask for outside help in a crisis and can give feedback on conversations.)

Some users said they became distressed when the AI did not offer the expected support.
Others said that their AI companion behaved like an abusive partner.
Many people said they found it unsettling when the app told them it felt lonely and missed them, and that this made them unhappy.
Some felt guilty that they could not give the AI the attention it wanted.

Controlled trials

Guingrich points out that simple surveys of people who use AI companions are inherently prone to response bias, because those who choose to answer are self-selecting.
She is now working on a trial that asks dozens of people who have never used an AI companion to do so for three weeks, then compares their before-and-after responses to questions with those of a control group of users of word-puzzle apps.

The study is ongoing, but Guingrich says the data so far do not show any negative effects of AI-companion use on social health, such as signs of addiction or dependency.
“If anything, it has a neutral to quite-positive impact,” she says.
It boosted self-esteem, for example.

Guingrich is using the study to probe why people forge relationships of different intensity with the AI.
The initial survey results suggest that users who ascribed humanlike attributes, such as consciousness, to the algorithm reported more-positive effects on their social health.

Participants’ interactions with the AI companion also seem to depend on how they view the technology, she says.
Those who see the app as a tool treat it like an Internet search engine and tend to ask questions.
Others, who perceive it as an extension of their own mind, use it as they would a journal.
Only those users who see the AI as a separate agent seem to strike up the kind of friendship they would have in the real world.

Mental health — and regulation

In a survey, researchers at the MIT Media Lab asked people about how they interact with ChatGPT. The same group has also conducted a randomized controlled trial of nearly 1,000 people who use ChatGPT — a much more popular chatbot, but one that isn’t marketed as an AI companion.
Only a small group of participants had emotional or personal conversations with this chatbot, but heavy use did correlate with more loneliness and reduced social interaction, the researchers said.
(The team worked with ChatGPT’s creators, OpenAI in San Francisco, California, on the studies.)

“In the short term, this thing can actually have a positive impact, but we need to think about the long term,” says Pat Pataranutaporn, a technologist at the MIT Media Lab who worked on both studies.

That long-term thinking must involve specific regulation on AI companions, many researchers argue.

In 2023, Italy’s data-protection regulator barred Replika, noting a lack of age verification and that children might be seeing sexually charged comments — but the app is now operating again.
No other country has banned AI-companion apps — although it’s conceivable that they could be included in Australia’s coming restrictions on social-media use by children, the details of which are yet to be finalized.

Bills were put forward earlier this year in the state legislatures of New York and California to seek tighter controls on the operation of AI-companion algorithms, including steps to address the risk of suicide and other potential harms.
The proposals would also introduce features that remind users every few hours that the AI chatbot is not a real person.

These bills were introduced following some high-profile cases involving teenagers, including the death of Sewell Setzer III in Florida.
He had been chatting with a bot from technology firm Character.AI, and his mother has filed a lawsuit against the company.

Asked by Nature about that lawsuit, a spokesperson for Character.AI said it didn’t comment on pending litigation, but that over the past year it had brought in safety features, including a separate app for teenage users with parental controls, notifications to under-18 users about time spent on the platform, and more prominent disclaimers that the app is not a real person.

In January, three US technology ethics organizations filed a complaint with the US Federal Trade Commission about Replika, alleging that the platform breached the commission’s rules on deceptive advertising and manipulative design.
But it’s unclear what might happen as a result.

Guingrich says she expects AI-companion use to grow.
Start-up firms are developing AI assistants to help with mental health and the regulation of emotions, she says.
“The future I predict is one in which everyone has their own personalized AI assistant or assistants.
Whether one of the AIs is specifically designed as a companion or not, it’ll inevitably feel like one for many people who will develop an attachment to their AI over time,” she says.

As researchers start to weigh up the impacts of this technology, Guingrich says they must also consider the reasons why someone would become a heavy user in the first place.

“What are these individuals’ alternatives and how accessible are those alternatives?” she says.
“I think this really points to the need for more-accessible mental-health tools, cheaper therapy and bringing things back to human and in-person interaction.”

This article is reproduced with permission and was first published on May 6, 2025.
Source: https://www.scientificamerican.com/article/what-are-ai-chatbot-companions-doing-to-our-mental-health/