In Southern California, where homelessness rates are among the highest in the country, a private company, Akido Labs, runs clinics for homeless patients and other low-income people. Patients are seen by medical assistants who use artificial intelligence (AI) to listen to conversations and then suggest possible diagnoses and treatment plans, which are reviewed by a doctor. The company's goal, its chief technology officer told MIT Technology Review, is to "take the doctor out of the loop".
This is dangerous. Yet it is part of a larger trend in which generative AI is being pushed into health care. In 2025, a survey by the American Medical Association reported that two out of three physicians use AI to assist with their daily tasks, including diagnosing patients. One AI startup raised $200 million to provide medical professionals with an app billed as "ChatGPT for doctors". US lawmakers are considering a bill that would recognize AI as capable of prescribing medication. While this trend affects almost all patients, it has a deeper impact on low-income people, who already face substantial barriers to care and higher rates of mistreatment in health care settings. People who are unhoused or have low incomes should not be used as a testing ground for AI in health care. Instead, their voices and preferences should drive whether, how, and when AI is implemented in their care.
The rise of AI in health care did not happen in a vacuum. Overcrowded hospitals, overworked physicians, and the constant pressure to keep medical offices shuttling patients in and out of the for-profit health care system set the conditions. The demand on health care workers is often greatest in economically disadvantaged communities, where health care settings are under-resourced, many patients are uninsured, and there is a higher burden of chronic health conditions due to racism and poverty.
This is where one might ask, "Isn't something better than nothing?" Actually, no. Studies show that AI-enabled tools can produce misdiagnoses. A 2021 study in Nature Medicine examined AI algorithms trained on large chest X-ray datasets used in medical imaging research and found that they systematically underdiagnosed Black and Latinx patients, patients recorded as female, and patients with Medicaid insurance. This systematic bias risks deepening health disparities for patients already facing barriers to care. Another study, published in 2024, found that AI contributed to misdiagnosis in breast cancer screening: Black patients were more likely to receive false positives than their white counterparts. Because of algorithmic bias, some clinical AI tools have performed notably poorly for Black patients and other people of color. AI is not "thinking" independently; it relies on probabilities and pattern recognition, which can reinforce bias against already marginalized patients.
Some patients are not even informed that their health provider or health system is using AI. A medical assistant told MIT Technology Review that his patients know the AI system is listening, but he does not tell them that it makes clinical recommendations. This is reminiscent of an era of exploitative medical racism in which experiments were performed on Black people without informed consent and often against their will. Can AI help health providers by surfacing information faster so they can get to the next patient? Possibly. But the problem is that this may come at the cost of diagnostic accuracy and widening health disparities.
And the potential impact goes beyond diagnostic accuracy. TechTonic Justice, an advocacy group working to protect economically marginalized communities from the harms of AI, has published a report estimating that 92 million low-income Americans "have some basic aspect of their lives decided by AI". Those decisions include how much assistance they receive from Medicaid and whether they are eligible for Social Security Administration disability benefits.
A real-world example is playing out in the federal courts right now. In 2023, a group of Medicare Advantage customers filed a lawsuit against UnitedHealthcare in Minnesota, alleging that they were denied coverage because the company's AI system, nH Predict, mistakenly deemed them ineligible. Some plaintiffs are the estates of Medicare Advantage customers; those patients reportedly died after being denied medically necessary care. UnitedHealth sought to dismiss the case, but in 2025 a judge ruled that the plaintiffs could proceed with certain claims. A similar case was filed in federal court in Kentucky against Humana. There, Medicare Advantage customers alleged that Humana's use of nH Predict overrides clinicians' recommendations based on incomplete and inadequate medical records. That case is also ongoing: a judge has ruled that the plaintiffs' legal arguments are sufficient to proceed, denying the insurance company's motion to dismiss. Although final verdicts in both cases are still pending, they point to the growing trend of AI deciding the health coverage of low-income people, and to its harms. If you have financial resources, you can get quality health care. But if you are unhoused or have a low income, AI may prevent you from accessing health care altogether. That is medical classism.
We should not experiment with AI rollouts on patients who are unhoused or low-income. The documented harms outweigh the potential, unproven benefits promised by startups and other tech ventures. Given the barriers that unhoused and low-income people face, it is important that they receive patient-centered care from a human health care provider who listens to their health needs and preferences. We cannot normalize a health system in which health practitioners take a back seat while AI, driven by private companies, takes the lead. An AI system that "listens" and is deployed without rigorous evaluation by the communities it affects disempowers patients, stripping them of the right to decide which technologies, including AI, are used in their care.
Leah Goodridge is an attorney who has worked in homelessness prevention litigation for 12 years.
Oni Blackstock, MD, MHS, is a physician, founder and executive director of Health Justice, and a Public Voices Fellow on Technology in the Public Interest with The OpEd Project.