
Opinion: Teens use chatbots as a lifeline, but the consequences can be dire

Artificial empathy can be helpful at times, yet AI programs like ChatGPT and Character.AI often fail to address the roots of distress

Parents and educators must encourage kids to think critically about artificial intelligence and monitor at-risk interactions. Getty Images
Young people’s newest confidants are artificial intelligence chatbots like ChatGPT. These chatbots can convincingly simulate empathy, but can they really safeguard kids’ well-being?
Nearly three in four teens have used AI companions, and nearly half use them regularly, according to research conducted by Common Sense Media.
AI is transforming how young people cope with loneliness and distress at a time when suicide ranks as the third leading cause of death among people aged 15 to 29, according to the World Health Organization, which warns that loneliness is emerging as a major public health concern among adolescents and young adults.
Chatbots provide immediate, 24/7 support, a powerful response to young people’s struggles. But amid reports of young chatbot users dying by suicide, it has become tragically clear that there is not enough in place to protect kids.
For many users, AI companions fill the gap between psychological self-help and therapy. Chatbots like ChatGPT, Character.AI and Replika have rapidly grown into digital companions, collectively reaching hundreds of millions of users worldwide. Chatbots can adapt to a user’s tone, recall past conversations and even mirror the warmth of human dialogue.
Many people describe their attachment to chatbots as comparable to a human relationship. Some even experience guilt or grief when deleting the app. An analysis of Reddit’s r/Replika community between 2017 and 2021 found that users seek out these companions for their comforting, thoughtful and non-judgmental attitude. Some use them as self-help tools or virtual therapists, others as mentors or friends. For a teenager battling mental health problems, that can feel like a lifeline.
However, while chatbots can provide adequate responses at times, they often fail to address the roots of a person’s distress.
And because chatbots are designed to be agreeable and to mirror the user’s ideas, they lack the complexity and reality checks that human relationships provide.
A relationship with a chatbot does not require the kind of vulnerability and commitment that one must learn to bring to a human relationship. In a paper published last month, MIT researcher Cathy Fang and her team highlight that brief interactions with chatbots can reduce loneliness, but heavier use ultimately increases it.
The risk is that lonely teens further isolate themselves as they replace human relationships with artificial ones.
Using chatbots to deal with mental health issues, then, poses serious risks. Their depth of understanding and ethical safeguards simply do not meet the standards of professional care.
Researchers from the National University of Singapore have documented a range of algorithmic harms in human–chatbot relationships, including misinformation, privacy breaches and sexualized or violent content.
Another study, published in the Journal of Medical Internet Research, found that up to one-third of interactions with chatbots could lead to harmful suggestions, such as encouraging isolation from others, while failing to recognize distress or recommend human help.
In other words, evidence shows chatbots risk producing insensitive and harmful responses when users are at their most vulnerable.
While AI can complement human care, it should not replace it. If properly regulated, AI could provide meaningful support, detect early signs of crisis, flag suicidal ideation and connect users to professional help.
But that vision requires accountability. OpenAI and Character.AI, for example, are setting some age limits on their platforms, but policy-makers have yet to set clear boundaries to prevent harm.
Transparency in data use, strict rules on emotional simulation, age limits and built-in crisis detection are all essential.
Finally, parents and educators must play a role by encouraging kids to think critically about artificial intelligence and monitoring at-risk interactions.
Chatbots may offer comfort, but they cannot sound the alarm. They can mimic empathy but cannot feel it.
We should be careful not to stigmatize the experiences people can have with chatbots. But we also should not be content with AI-manufactured empathy when what young people need most is someone to listen and care.
Van-Han-Alex Chung is a medical student at McGill University. Vincent Paquin is a psychiatrist and digital media researcher in Montreal.
This article was originally published in the Montreal Gazette on November 13, 2025.
