She also notes that the ethical ramifications stem from ChatGPT's lack of regulation. It has been neither tested nor approved to act as a therapy tool, and thus, using it could lead to harmful outcomes.
“There have been some really tragic cases of people having really negative outcomes,” said Dr. Gold. “There was this case that someone reported on, of a child dying by suicide, allegedly because of some dialogue on ChatGPT.”
The case she’s referring to is that of 16-year-old Adam Raine, a young man who turned to ChatGPT to work through anxiety and ended up taking his own life, with his parents alleging that the program coached him on how to plan and follow through with his suicide.
“It can be really dangerous,” said Dr. Gold.
Refinement can help, but it will never replace traditional care
While ChatGPT, as it stands today, is not a viable source of therapy, that doesn’t mean people should throw the baby out with the bathwater. AI solutions have their upsides, even if ChatGPT specifically does not, and with refinement they could become a valuable tool in the future for addressing some of the challenges faced by people living with mental health conditions, such as access to care.