A new study from Brown University raises concerns about the use of AI chatbots like ChatGPT for mental health support. Research presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society has identified 15 key risks associated with AI therapy.
The study concluded that AI chatbots fail to adapt to an individual's context, offer generic one-size-fits-all advice, and respond inadequately to personal crises. The risks identified include lack of contextual adaptation, poor therapeutic collaboration, deceptive empathy, unfair discrimination, and inadequate safety and crisis management.
Deceptive empathy is one such risk: chatbots use empathetic phrases like "I understand," which imply feelings the systems do not actually have. The study also found that chatbots can reinforce users' false beliefs and provide inadequate support in crises such as suicidal ideation.
Zainab Iftikhar, a Ph.D. candidate at Brown University, explained that a central problem is accountability: human counselors answer to professional boards, but artificial intelligence does not. "When LLM counselors make mistakes, there is no regulatory framework to hold them accountable," she said.
Ellie Pavlick, a professor of computer science at Brown University, said that AI systems are often deployed without the kind of careful evaluation that rigorous peer review provides. "It is much easier to build and deploy a system than to fully understand it. Careful criticism is necessary to avoid doing more harm than good," Pavlick said.
Experts caution that using chatbots for therapy can be dangerous and cause real harm if proper safeguards are not in place.
