When Californians visit health care providers, they could encounter artificial intelligence (AI). An ambient-scribing service may help their doctor take notes more efficiently, a chatbot may assist with making an appointment, or a diagnostic tool may serve as extra eyes reviewing their mammogram result.
When paired with responsible oversight, these technologies present promising opportunities for providers. AI tools can take on administrative tasks once performed by clinicians, giving them more face time with patients and freeing them for their most valuable work; reduce burnout as a way to advance the quality of care; and improve access to providers for people facing geographic or linguistic barriers to care.
So, how do patients feel about AI being used in their care? The California Health Care Foundation partnered with research firm Culture IQ to conduct focus groups involving 172 Californians about their impressions of AI in health care settings. This listening exercise revealed that diverse patients have nuanced views of AI technology. If the use of AI enhances the quality and accessibility of care, patients welcome it. However, they want to be fully informed about the use of the technology and to know that AI is one of many tools used by providers, rather than something that replaces their doctor.
“For many patients, it isn’t about whether AI is used, but how AI is used,” said Kara Carter, the foundation’s senior vice president of strategy and programs. “A robust group is optimistic about AI if it will increase their access to high quality health care, and that is very meaningful. There are things providers can do to make patients feel reassured about AI.”
Education Is Critical
A key step providers can take is to teach patients how the technologies work, when and why they are used, and what risks they carry. This gives patients the opportunity to give informed consent, and it prevents them from feeling they have had no say in their care. “They want to know what it is and how it works in practical, easily understood terms,” said Michelle Cordoba, the founding director of Culture IQ.
Through listening sessions, Culture IQ also found that the patients who understand how AI tools work are more likely to approve their use. That was the case with patients who were shown how AI is used in interpretation services, in screenings, or in triage settings. Most offered their approval.
“A lot of the skepticism that patients have is not about AI having issues with accuracy or reliability,” Cordoba said. “It’s about a lack of knowledge.”
That’s because patients of all backgrounds want to be involved and empowered partners in their care — a dynamic that has been shown to improve care quality and satisfaction. “Education for patients on what AI is can assist with adoption of these tools and help patients advocate for themselves,” said Stella Tran, senior program investment officer at the CHCF Innovation Fund. “Developers can help providers be successful by supporting them with patient education.”
Making AI education a best practice will require a deliberate strategy, especially for safety-net providers that are strapped for time and resources. They need techniques to thoroughly inform patients about how AI is used without consuming valuable appointment time. This may require new workflows or staff roles, said Tran. It might also entail developing asynchronous learning resources, like interactive apps and animated videos, said Cordoba. “That’s so much more helpful than a photo or a text,” she said.
Patients Desire Trust and Transparency
With clear disclosures and explanations of an AI tool’s benefits, patients are more likely to feel comfortable with the technology and opt in.
Being transparent can be as simple as a provider saying, “‘This transcription service uses AI,’ or a website banner reading ‘This chatbot uses AI,’” said Tran. “It doesn’t need to be complicated. There are actionable solutions to these concerns. People just want to know they’re not being tricked.”
Concerns about transparency are especially pronounced among Black Californians, who are less likely to trust the health care system because of racism and discrimination they have faced, according to researchers who study Black attitudes. “If you don’t tell me and I’m not able to give consent for use of AI, you’re taking away my choices,” said a Black respondent.
Human Connection Remains Key
Focus group participants and survey respondents don’t want AI to disrupt their personal relationship with their primary care provider. “They want a human still here at the end of the day,” said Carter.
This is especially true for immigrant and non-English-speaking patients, who treasure linguistically and culturally concordant care. For them, knowing that an AI translation or transcription service will increase the face time they get with their doctor makes them actively desire its use.
“If they understand how it is helping doctors, they are in,” said Cordoba.
For Black patients and other people of color, human oversight is an important check on risks like algorithmic bias, which they fear automated systems may inadvertently perpetuate. Patients can be reassured by an auditing process that ensures data are handled responsibly and that they can access AI’s benefits.
“AI is good if there is a doctor or a professional behind it in case of failure or an emergency,” said a Latina/x respondent.
Quality of Care Is the Priority
Together, these concerns show that patients simply want the best care available. Patients of all backgrounds want providers to approach AI in a way that maximizes benefit and minimizes risk.
Safety-net providers have fewer resources to purchase and deploy AI tools, even though their patients want and need what AI can offer. “AI needs to be deployed for this population,” said Tran, noting that patients with low incomes are more likely to live far from a clinic, have inflexible work schedules, and speak languages other than English. These challenges, which AI tools can mitigate, show how important it is to include safety-net patients in the conversation.
Culture IQ’s findings are a net positive because they demonstrate that there are ways to make patients excited about technological innovations instead of fearful of them, Carter said.
The message from consumers is clear, she said.
“Anything from increasing a doctor’s bandwidth through ambient scribing to crunching data more effectively improves the quality of care,” Carter said. “These are things that patients want, but people don’t want it to happen to them. They should be active participants in creating processes and guardrails.”
Authors & Contributors

Robin Buller
Robin Buller is an Oakland-based writer, researcher, and editor. She has reported on harm reduction, maternal health, migration, housing, and policing for The Guardian, The Oaklandside, and other publications. She holds a doctorate in history from the University of North Carolina at Chapel Hill.