ChatGPT is no therapist: How using AI in place of traditional therapy can be harmful

At first glance, being able to openly chat with an AI assistant may seem helpful, but there are “a lot of red flags”

ChatGPT cannot act as a therapist for many reasons, both ethical and legal. Getty Images
This article contains mention of suicide and may be sensitive for some readers. If you or someone you know is struggling with mental health issues, depression or suicidal thoughts, please reach out to a trusted person or professional.
As many as one in five Canadians will be diagnosed with a mental health condition in their lifetime, typically by the age of 25. Many of those conditions benefit from therapy, but wait times in the country can stretch to five months or more, leaving people to either wait for the care they need or turn to other avenues.
One such tool that people have been using as a substitute for traditional therapy is ChatGPT, an artificial intelligence assistant that can converse with people on a variety of topics, including their own mental health struggles.
While the actual number of people using the service this way isn’t well documented, since it’s a new phenomenon, Dr. Alexandra Gold, a licensed clinical psychologist and faculty member at Harvard Medical School and Massachusetts General Hospital, notes that “a lot of people are turning to ChatGPT” and “it’s becoming a problem.”

How ChatGPT therapy works and why people use it

Using an AI tool as a therapist doesn’t exactly provide people with care, but it can make them feel as though they have someone to talk to when they aren’t able to access a professional or can’t openly talk about certain things with their family and friends.
For example, if someone is struggling with anxiety, ChatGPT may suggest various coping strategies, drawing on patterns in its training data and presenting them conversationally, much like a friend would. But it’s not just anxiety or other smaller issues that people bring to ChatGPT.
Since anyone can log on at any point of the day, more and more people are using it to get around the wait-time hurdle.
“I think that’s why people turn to it, because it’s like, 11 p.m., and you’re feeling sad. You can go talk to ChatGPT. You can’t really call your therapist unless it’s a crisis,” said Dr. Gold.
It’s also about the responses they get, which tend to make people feel like they finally have someone on their side.
“We know that these AI models can be really compassionate,” said Dr. Gold. “Some of the research that has come out at this point has spoken to the connection and the support as being one of the major factors that drives people to ChatGPT.”
She also notes that it’s easy to use: people can type in whatever issues they are facing and get responses or suggestions from the chatbot to help them work through their problems. It all sounds helpful and can give people feelings of emotional support, validation, and understanding. But there are “a lot of red flags.”

Dark side of ChatGPT-based AI therapy

At first glance, being able to openly chat with an AI assistant may seem helpful, but there are “a lot of red flags,” as Dr. Gold notes, especially if people are choosing to use the service rather than seeking traditional care from a licensed medical professional.
“First of all, there’s no ethical standards that ChatGPT is adhering to,” she said. “We don’t know what they’re doing with the data … they’re not beholden to any regulations or boards.”
This is backed up by recent comments from OpenAI CEO Sam Altman himself. On the podcast This Past Weekend with Theo Von, he stated that the chats people have with these systems are not legally private or protected. Anything said to ChatGPT is not confidential, and it can be accessible to law enforcement or other entities.
The chatbot is also not well-versed in differentiating between validation and self-reflection: it may give a person a “yes” when what they actually need is a “no,” or something more complex than validation, such as actionable ways to assess their behaviours, thoughts and feelings and change them.
“Validation is a huge part of therapy, but part of therapy is also helping people engage in self-reflection and not just be a ‘yes’ person,” said Dr. Gold.
She also notes that the ethical ramifications stem from ChatGPT’s lack of regulation. It has been neither tested nor approved to act as a therapy tool, and using it this way could lead to harmful outcomes.
“There have been some really tragic cases of people having really negative outcomes,” said Dr. Gold. “There was this case that someone reported on, someone having a suicide, a child, because of some dialogue allegedly that was on ChatGPT.”
The case she’s referring to is that of 16-year-old Adam Raine, a teenager who turned to ChatGPT to work through his anxiety and ended up taking his own life. His parents allege that the program coached him on how to plan and follow through with his suicide.
“It can be really dangerous,” said Dr. Gold.

Refinement can help, but it will never replace traditional care

While ChatGPT, as it stands today, is not a viable source of therapy, that doesn’t mean people should throw the baby out with the bathwater. AI solutions have their upsides, even if not specifically ChatGPT, and with refinement they could become a valuable tool for addressing some of the challenges faced by people living with mental health conditions, such as access to care.
“I think accessibility is huge. If you could have a platform where it’s accessible at all times and whenever someone needs support, then that goes a long way,” said Dr. Gold.
Dr. Gold notes that medical professionals have already begun testing various AI applications to see how they could serve as a supplemental tool in mental health interventions. Current research indicates that AI can be a tool, but not a substitute, for professional care.
“There’s been some research where it has been studied in more test cases, like hospital settings, clinics, that have created large language models, validated and tested by psychiatrists,” she said. “They thought really carefully about how to handle risk cases like suicide-type cases or suicidal thoughts, and that really worked because it wasn’t looked at as a replacement for therapy.”
She also mentions that having an AI assistant to work through some less severe mental health struggles can benefit both the system and its patients: if people who can successfully use the tool as an aid don’t require more in-depth sessions with a psychotherapist, it ultimately frees up therapists’ time for more severe cases.
“(It’s) sort of like lightening the load in terms of people who need therapy versus people who need help now, like someone who can use help in their day-to-day life and then someone who is dangerously close to being sick to the point where they are hospitalized or worse,” she said. “The latter would be someone who benefits more from a higher level of care.”
Cost is another factor that plays into care. With an AI tool, those who cannot afford to see a traditional therapist may lean toward AI models as a way to offset the financial burden.
“We’re in a tough economy right now. I think people want to cut costs if they can, and it makes complete sense, and it’s hard to get good care,” said Dr. Gold.
At the end of the day, ChatGPT is no therapist, and even when AI models do become better equipped to handle mental health conditions and distress in beneficial ways, they will never be a replacement for the real thing.
Dr. Gold notes that she would “never, ever recommend ChatGPT as a therapist,” but the fact that people are turning to it points to a different issue altogether.
“It speaks to a problem in our mental healthcare, where people feel like they either can’t afford or can’t get the right care, so I think there’s work for the clinicians and the access-to-care side to be working on to make it so people don’t feel like they have to turn to ChatGPT.”
Angelica Bottaro

Angelica Bottaro is the lead editor at healthing.ca and has been writing health content for over a decade. Her goal as a health journalist is to bring awareness and information to people that they can use as an additional tool toward their own optimal health.
