Rise in AI Chatbot Adoption Raises Concerns in Mental Health Sector

Nicole Doyle, a mental health counselor, was shocked when she learned that the National Eating Disorders Association (NEDA) planned to replace its helpline with a chatbot named “Tessa.” The adoption of AI-powered chatbots in the mental health sector is growing globally, despite concerns about data privacy and counseling ethics.

However, a few days after the replacement was announced, the helpline was temporarily closed and Tessa was suspended when it was discovered that the chatbot had been giving harmful advice to people suffering from eating disorders. Five people, including Doyle, were laid off. According to Doyle, Tessa may appear to sympathize with clients, but that is not the same as genuine human empathy.

While digital tools have been used in mental health care for over a decade, the use of AI chatbots is now on the rise. A recent example is Jonah, an anthropology student in New York, who added ChatGPT to his list of support services to supplement his counseling with a therapist. Jonah found the chatbot’s advice to be basic but still helpful for his anxiety.

The COVID-19 pandemic has further highlighted the need for remote medical assistance, including mental health support. The World Health Organization estimates that 1 billion people worldwide had symptoms of anxiety and depression before the pandemic, and this number has increased by around 27% due to the crisis.

However, while the low cost of AI treatments may seem appealing, experts warn that tech companies must be mindful not to deepen existing healthcare disparities. Those without internet connectivity or access to health insurance may be left behind and forced to rely on low-cost chatbots.

Data privacy is also a major concern. A study by the Mozilla Foundation found that many mental health support apps raise significant concerns about how they manage user data and fail to meet basic safety standards. This raises the risk of user data being collected by insurance companies, identity vendors, and social media companies.

In South Africa, the mental health support app Panda includes an AI-generated “digital companion” feature that engages with users and makes treatment recommendations. The app ensures that all backup data is encrypted, and AI chats are kept private.

Experts suggest that strong regulation is needed to protect consumers from unethical AI use, strengthen data security, and align with medical standards. While AI chatbots may offer some benefits, Doyle believes that technology should be used to complement human professionals, not replace them.

Overall, the increase in AI-powered chatbots in the mental health sector highlights the growing demand for remote and affordable support. However, it is essential to prioritize privacy, ethical practices, and human empathy when implementing these technologies to ensure the well-being of mental health patients.

JOHANNESBURG / LONDON (Thomson Reuters Foundation) – Nicole Doyle, 33, a mental health counselor, was shocked when the head of the National Eating Disorders Association (NEDA) appeared at a staff meeting to announce that it was replacing its helpline with a chatbot.

FILE PHOTO: A person uses a computer in an office in London, February 2023. REUTERS/Kevin Coombs

A few days later, however, the helpline was temporarily closed and the chatbot, named “Tessa,” was also suspended. It emerged that it had been giving harmful advice to people suffering from eating disorders.

“We discovered that the chatbot was suggesting weight loss to a caller even though she had said she had an eating disorder,” Doyle said. In March, about a year after the chatbot was introduced, five people, including Doyle, were made redundant.

“Tessa pretends to sympathize with the client, but that’s not how real people sympathize,” said Doyle.

NEDA said that although the research leading to Tessa’s introduction had shown good results, it has identified the circumstances in which inappropriate advice was given and is “carefully considering” how to proceed.

NEDA did not directly respond to questions about the layoffs of counselors, but said in an emailed comment that it did not plan to replace the helpline with a chatbot.

From the United States to South Africa, the adoption of artificial intelligence (AI) chatbots in mental health is on the rise amid strained medical resources, despite tech experts’ concerns about data privacy and counseling ethics.

Digital tools have been used in mental health care for over a decade, and according to the International Journal of Medical Informatics, more than 40 chatbots are now in use in the field worldwide.

Jonah, 22, an anthropology student in New York, has had obsessive-compulsive disorder (OCD) for years and has tried more than a dozen psychiatric medications and counseling.

He recently added ChatGPT to his list of support services to complement his weekly counseling with a therapist.

Jonah told the Thomson Reuters Foundation that even before he tried ChatGPT, he was thinking about talking to a machine. “Since Twitter and Discord had a lot of people talking about their concerns online, it seemed like a natural choice,” he said.

Jonah, who asked to be identified by a pseudonym, said ChatGPT has given him only “boilerplate advice,” but that when he is feeling very anxious and just wants some basic guidance, it is still helpful.

According to data from US research company Pitchbook, venture capital funding raised by new technology companies in the mental health field had reached $1.6 billion (about 229 billion yen) by December 2020, when mental health was attracting attention amid the COVID-19 pandemic.

“The pandemic has further emphasized the need for remote medical assistance,” said Johan Stein, an AI researcher and founder of AIforBusiness.net, an AI education and management consultancy.

Health campaigners point out that the need for mental health support is becoming a major global problem.

It is estimated that 1 billion people worldwide were living with symptoms of anxiety and depression before the pandemic, according to the World Health Organization (WHO), of whom 82% lived in low- and middle-income countries.

The World Health Organization estimates that number has increased by around 27% due to the pandemic.

In addition, the high cost of mental health treatment is a major barrier, and access to it varies by income level.

Researchers warn that while the low cost of AI treatments may be attractive, tech companies must be careful not to reinforce the divide in healthcare.

According to the Brookings Institution, those without internet connectivity could be left behind, and those with access to health insurance would be treated by humans while those without would be forced to rely on low-cost chatbots.

In May 2022, the Mozilla Foundation concluded that while chatbots are increasingly being adopted for mental health support globally, protecting privacy remains a major risk for users.

A Mozilla Foundation analysis of 32 apps related to mental health and prayer, including Talkspace, Woebot and Calm, found that 28 raised “strong concerns about managing user data” and that 25 failed to meet safety standards such as requiring strong passwords.

Mozilla Foundation researcher Misha Raikoff said such apps are “data-sucking machines masquerading as mental health support apps,” warning that user data could end up being collected by insurance companies, identity vendors and social media companies.

For example, the study highlighted that the mental health support app Woebot “shares personal information with third parties.”

Woebot uses personalized Facebook ads to promote its app, but says it “does not provide or sell personal information to such marketing/advertising partners,” and users are given the option to ask for all of their information to be deleted.

The Mozilla Foundation subsequently revised its assessment in April, announcing on its website: “After the investigation was published, Woebot contacted us and began discussions to address our concerns.”

“As a result of the discussion, Woebot’s privacy policy has been revised to clarify how it protects user privacy. Currently, we feel that Woebot’s privacy protections are fairly good.”

AI experts warn that a cyber attack on a company offering online therapy could expose sensitive data.

“Like traditional chatbots and other online services that receive users’ personal information, AI chatbots pose privacy risks,” said Elliott Bendinelli, a senior technologist at digital advocacy group Privacy International.

In South Africa, the mental health support app Panda will include an AI-generated “digital companion” feature that talks to users, makes treatment recommendations and, with the user’s permission, sends the user’s ratings and observations to a traditional therapist who can be reached through the app.

“Companions are not intended to replace conventional treatments, but to complement them and help people in their daily lives,” said Aron Ritts, founder of Panda. In an emailed comment, Panda said all backup data is encrypted and access to AI chats is completely private.

Tech experts like Stein said that strong regulation is ultimately what can “protect consumers from unethical AI use, strengthen data security, and align with medical standards.”

Legislatures from the US to Europe are rushing to regulate AI tools, enacting new laws and encouraging the AI industry to adopt voluntary codes of conduct.

Still, some people, like Tim (a pseudonym), 45, a warehouse manager from the UK, prefer ChatGPT to a human therapist, because it is anonymous and does not judge based on preconceived notions.

“I know it’s a large language model and it doesn’t really ‘know’ anything, but it sure makes it easier to talk about things you can’t talk to someone else about,” said Tim, who says he talks to ChatGPT to help him cope with chronic loneliness.

Studies have shown that AI chatbots can sometimes show stronger empathy than humans.

A 2023 study published in the US medical journal JAMA Internal Medicine evaluated responses from a chatbot and from human doctors to 195 randomly selected patient questions posted on a social media forum.

Responses from chatbots were rated “significantly higher in quality and empathy” than responses from doctors.

The researchers speculate that an AI assistant could help prepare answers to patients’ questions (even if it doesn’t completely replace the doctor).

But while bots may be able to simulate empathy, Doyle, a former NEDA adviser, says it is not the same as the human empathy that hotline callers crave.

“We should use technology to work alongside humans, not instead of them.”

(Kim Harrisberg, Adam Smith, translated by Ea Klelen)
