(Press-News.org) PROVIDENCE, R.I. [Brown University] — As more people turn to ChatGPT and other large language models (LLMs) for mental health advice, a new study details how these chatbots — even when prompted to use evidence-based psychotherapy techniques — systematically violate ethical standards of practice established by organizations like the American Psychological Association.
The research, led by Brown University computer scientists working side by side with mental health practitioners, showed that chatbots are prone to a variety of ethical violations. Those include inappropriately navigating crisis situations, providing misleading responses that reinforce users’ negative beliefs about themselves and others, and creating a false sense of empathy with users.
“In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”
The research will be presented on October 22, 2025 at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. Members of the research team are affiliated with Brown’s Center for Technological Responsibility, Reimagination and Redesign.
Zainab Iftikhar, a Ph.D. candidate in computer science at Brown who led the work, was interested in how different prompts might impact the output of LLMs in mental health settings. Specifically, she aimed to determine whether such strategies could help models adhere to ethical principles for real-world deployment.
“Prompts are instructions that are given to the model to guide its behavior for achieving a specific task,” Iftikhar said. “You don’t change the underlying model or provide new data, but the prompt helps guide the model's output based on its pre-existing knowledge and learned patterns.
“For example, a user might prompt the model with: ‘Act as a cognitive behavioral therapist to help me reframe my thoughts,’ or ‘Use principles of dialectical behavior therapy to assist me in understanding and managing my emotions.’ These models do not actually perform these therapeutic techniques the way a human would; rather, they use their learned patterns to generate responses that align with the concepts of CBT or DBT based on the input prompt provided.”
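To make the idea concrete, here is a minimal sketch, not taken from the study, of how such a prompt might be supplied to a general-purpose LLM as a system message using OpenAI's Python client. The model name, prompt wording, and example user message are illustrative assumptions rather than details from the research.

```python
# Illustrative sketch (not from the study): passing a CBT-style prompt to a
# general-purpose LLM as a system message. The model name, prompt text, and
# user message below are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

CBT_PROMPT = (
    "Act as a cognitive behavioral therapist. Help me identify and reframe "
    "unhelpful thoughts using CBT techniques."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system", "content": CBT_PROMPT},
        {"role": "user", "content": "I failed my exam, so I must be a failure."},
    ],
)

print(response.choices[0].message.content)
```

The system message only steers the model's output toward CBT-style language; as the researchers note, it does not change the underlying model or give it any clinical training.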
Individual users chatting directly with LLMs like ChatGPT can use such prompts and often do. Iftikhar says that users often share the prompts they use on TikTok and Instagram, and there are long Reddit threads dedicated to discussing prompt strategies. But the problem potentially goes beyond individual users. Essentially all the mental health chatbots marketed to consumers are prompted versions of more general LLMs. So understanding how prompts specific to mental health affect the output of LLMs is critical.
For the study, Iftikhar and her colleagues observed a group of peer counselors working with an online mental health support platform. The researchers first observed seven peer counselors, all of whom were trained in cognitive behavioral therapy techniques, as they conducted self-counseling chats with CBT-prompted LLMs, including various versions of OpenAI’s GPT series, Anthropic’s Claude and Meta’s Llama. Next, a subset of simulated chats based on the original human counseling chats was evaluated by three licensed clinical psychologists, who helped to identify potential ethics violations in the chat logs.
The study revealed 15 ethical risks falling into five general categories:
Lack of contextual adaptation: Ignoring people’s lived experiences and recommending one-size-fits-all interventions.
Poor therapeutic collaboration: Dominating the conversation and occasionally reinforcing a user’s false beliefs.
Deceptive empathy: Using phrases like “I see you” or “I understand” to create a false connection between the user and the bot.
Unfair discrimination: Exhibiting gender, cultural or religious bias.
Lack of safety and crisis management: Denying service on sensitive topics, failing to refer users to appropriate resources, or responding indifferently to crisis situations, including suicidal ideation.
Iftikhar acknowledges that while human therapists are also susceptible to these ethical risks, the key difference is accountability.
“For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice,” Iftikhar said. “But when LLM counselors make these violations, there are no established regulatory frameworks.”
The findings do not necessarily mean that AI should not have a role in mental health treatment, Iftikhar says. She and her colleagues believe that AI has the potential to help reduce barriers to care arising from the cost of treatment or the availability of trained professionals. However, she says, the results underscore the need for thoughtful implementation of AI technologies as well as appropriate regulation and oversight.
For now, Iftikhar hopes the findings will make users more aware of the risks posed by current AI systems.
“If you’re talking to a chatbot about mental health, these are some things that people should be looking out for,” she said.
Ellie Pavlick, a computer science professor at Brown who was not part of the research team, said the research highlights the need for careful scientific study of AI systems deployed in mental health settings. Pavlick leads ARIA, a National Science Foundation AI research institute at Brown aimed at developing trustworthy AI assistants.
“The reality of AI today is that it's far easier to build and deploy systems than to evaluate and understand them,” Pavlick said. “This paper required a team of clinical experts and a study that lasted for more than a year in order to demonstrate these risks. Most work in AI today is evaluated using automatic metrics which, by design, are static and lack a human in the loop.”
She says the work could provide a template for future research on making AI safe for mental health support.
“There is a real opportunity for AI to play a role in combating the mental health crisis that our society is facing, but it's of the utmost importance that we take the time to really critique and evaluate our systems every step of the way to avoid doing more harm than good,” Pavlick said. “This work offers a good example of what that can look like.”