When patients call Butabika Hospital in Kampala, Uganda, seeking help for mental health problems, they are also helping future patients by contributing to the development of a therapy chatbot.
Calls to clinic helplines are being used to train an AI algorithm that researchers hope will eventually power chatbots offering therapy in local African languages.
One in 10 people in Africa is struggling with mental health issues, and the situation is dire across the continent: a shortage of mental health workers and stigma are major barriers to care in many places. Experts believe that where resources are scarce, AI can help fill the gap.
Professor Joyce Nakatumba-Nabende is the scientific head of the Makerere AI Lab at Makerere University. Her team is working with Butabika Hospital and with Mirembe Hospital in Dodoma, in neighboring Tanzania.
Some callers simply require factual information about opening hours or staff availability, but others talk about feeling suicidal or reveal other red flags about their mental state.
“Maybe someone wouldn’t say ‘suicide’ as a word, or they wouldn’t say ‘depression’ as a word, because some of these words don’t even exist in our local languages,” Nakatumba-Nabende says.
After removing patient-identifying information from call recordings, Nakatumba-Nabende's team analyzes them using AI to determine how people speaking Swahili or Luganda, or one of Uganda's dozens of other languages, might describe particular mental health disorders such as depression or psychosis.
Over time, the recorded calls can be run through an AI model, which will establish that “based on this conversation and the keywords, maybe there are depression tendencies, suicidal tendencies (and so) can we escalate the call or call the patient back for follow-up”, Nakatumba-Nabende says.
She says current chatbots don’t understand the context of how care is provided or what’s available in Uganda, and they’re only available in English. The ultimate goal is to “provide mental health care and services to the patient”, and to identify early when people need the more specialized care provided by psychiatrists.
Nakatumba-Nabende says the service can also be provided through SMS messages for those who do not have smartphones or internet access.
She says chatbots have many benefits. “When you automate, it’s faster. You can easily provide more services to people, and you can get results faster than training someone to get a medical degree and then specialize in psychiatry and then do internships and training.”
Scale and scope are also important: AI tools are readily available at any time. And, Nakatumba-Nabende says, people are reluctant to seek mental health care in clinics because of stigma. A digital intervention bypasses it.
She hopes the project will mean the existing workforce can “provide care to more people” and “reduce the burden of mental health illness in the country”.
Miranda Wolpert, mental health director at the Wellcome Trust, which is funding a variety of projects looking at AI for mental health globally, says the technology holds promise in diagnosis. “At the moment, we’re really, really, very reliant on people filling out paper-and-pencil questionnaires, and it may be that AI can help us think more effectively about how we can identify someone who is struggling,” she says.
Citing Swedish research on how playing Tetris can reduce PTSD symptoms, Wolpert says technology-facilitated treatments could look very different from the traditional mental health options of talking therapy or medication.
However, regulators are still grappling with the implications of greater use of AI in health care. For example, the South African Health Products Regulatory Authority (SAHPRA) and health NGO PATH are using funding from Wellcome to develop a regulatory framework.
Bilal Mateen, PATH’s chief AI officer, says it is important for countries to develop their own regulation. “‘Does this thing work well in Zulu?’, which is a question that South Africa is concerned about, that I think the FDA (US Food and Drug Administration) has never considered,” he says.
Christelna Reinecke, SAHPRA’s chief operating officer, wants users of AI algorithms for mental health to have the same assurance as someone taking a medication that it has been tested and is safe. “It’s not going to trigger hallucinations, and won’t give you weird results, and won’t cause more harm than good,” she says.
In the background loom cases of suicides linked to the use of chatbots, and of AI that appears to have promoted psychosis.
Reinecke wants to develop an advanced monitoring system that can identify “risky” outputs from generative AI tools in real time. “This can’t be something that happens ‘after the event,’ so far after the event that you put other patients at risk because you didn’t intervene fast enough,” she says.
The UK regulator, the Medicines and Healthcare products Regulatory Agency (MHRA), has one such initiative and is working with tech companies to understand how best to regulate AI in medical devices.
Regulators need to decide which risks are important to monitor, says Mateen. Sometimes, the benefits will outweigh the potential harms to such an extent that “we will be motivated to get it into people’s hands because it will help them”.
While much of the conversation around AI revolves around chatbots like Google Gemini and ChatGPT, Mateen suggests that “AI and generative AI… can be used to do a lot more”, such as training peer counselors to provide higher-quality care, or finding people the best type of treatment more quickly.
“One billion people around the world: that is the state of mental health we face today,” he says. “We don’t just have a workforce gap in sub-Saharan Africa; we have a workforce gap everywhere. Talk to someone in the UK about how long they have to wait to access talking therapy.
“Unmet needs everywhere could be met more effectively if we had better access to safe and effective technology.”