Promises and Pitfalls

A conversation on AI, health care, and equity with CHCF’s senior vice president for strategy and programs, Kara Carter

Kara Carter, CHCF’s senior vice president for strategy and programs, is spearheading the foundation’s AI learning journey. Photo: Christopher Che

It’s hard to have a conversation in health care circles these days without the topic of artificial intelligence (AI) coming up — and for good reason. California’s imperfect health care system is loaded with problems that AI theoretically could help solve, both behind the scenes and at the point of patient care. But it’s also hard not to imagine the many ways that AI in health care could go wrong. One big fear is that AI could worsen the inequities and bias that are already baked into the health care system.

For CHCF, which constantly looks for ways to make the health care system more effective and more just, the potential and the pitfalls of AI — particularly for California’s safety net — cannot be ignored. The CHCF Blog team thought now was a good moment to check in with the foundation’s leadership to get a sense of their thinking at this stage in AI’s evolution.

I sat down recently with Kara Carter, the senior vice president for strategy and programs, who has been spearheading CHCF's AI learning journey over the past year. Before joining CHCF in 2016, Kara was a partner in McKinsey & Company's San Francisco and London offices. She was a leader in McKinsey's US Medicaid practice and supported public- and private-sector health systems in the US, UK, and Europe in improving health care quality, access, and affordability. Our conversation was lightly edited for clarity and length.

AI Terminology

In this conversation, we used AI as a broad term encompassing multiple kinds of computer systems, including but not limited to so-called “generative” or “Gen AI” systems, which are designed to mimic human intelligence. According to ChatGPT, one of the leading Gen AI platforms, generative systems “have the ability to create content, generate human-like responses, or produce original output, demonstrating a capacity for creativity and innovation.”

Q: Artificial intelligence in health care is a huge, sprawling topic. Where specifically is CHCF focusing its attention?

A: The implications of AI for health care are massive. It has come up in nearly every conversation I've had with health care leaders throughout California in the last several months. For our part, CHCF is really focused on what AI means for people served by Medi-Cal. We're also focused on the impact of AI on health equity in California.

I should note that the conversations we’re having about AI aren’t theoretical or philosophical. We’re mainly interested in how AI can help fix actual pain points in the health care safety net and how it can solve the real-world problems that consumers, providers, and policymakers face.

Q: On a scale of 1 to 10, with 1 being extremely skeptical and 10 being highly optimistic, how would you rate your outlook on AI’s potential to enhance and improve the state’s health care safety net, especially for communities that rely heavily on Medi-Cal?

A: If you are asking about the potential of the technology itself to be truly useful, I’d have to rate it pretty high — either a 7 or an 8. I get that there’s a lot of hype about AI. It’s an incredibly shiny object, and I am normally very skeptical of shiny objects. Even so, I honestly believe the potential of AI to transform health care is very real — and a very big deal.

Q: What kept you from rating it a 10? Is there anything about AI in health care that you are especially worried about?

A: My concern is less about the technology itself and more about whether policymakers and health system leaders will be able to put in place the right guardrails and supports to manage its use. Will our leaders take the steps we need to protect patient privacy? Will they take steps to ensure that AI applications reduce rather than perpetuate historic inequities? Collectively as an industry, will we use the power of AI to make the health system more compassionate and patient-centered, or less so? And will leaders monitor and evaluate the use of AI so we can course-correct as needed?

How these questions get answered will ultimately determine whether AI lives up to its promise. AI may be a very powerful tool, but policy and health system leaders will need to be thoughtful and inclusive about how and where that tool gets used.

Q: What general use cases come to mind when you think about AI’s role in the health care safety net?

A: For me, two big categories are top of mind. The first is what I think of as back-end systems or operations. For example, there are lots of ways for AI to improve enrollment and the ways health plans engage with their members and providers engage with their patients. There are opportunities to streamline claims and billing, or to dramatically reduce the time doctors spend on preauthorization requests and other administrative tasks. And there is a lot of interest in “co-pilot” tools to assist with electronic medical records — specifically, by helping providers take notes and code them in real time during patient visits.

The second big use-case category is clinical support. One prime example is using AI-powered tools to provide language translation for patients who don't speak English. Another would be emergency departments, where AI could play a helpful role in diagnosing and triaging patients. And in primary care, AI could dramatically improve providers' ability to exchange data across disconnected systems to gain a whole-person view of their patients. AI could be a game changer for remote patient-monitoring devices, which primary care providers can use to help their patients manage diabetes and other chronic health conditions.

Q: One of the biggest challenges facing the health care safety net is health workforce shortages. How are you thinking about AI in that arena?

A: Absolutely. For me, this is less about AI replacing the existing workforce and more about health professionals partnering with AI — using it as a tool to address existing shortages better and faster. For example, there’s a major shortage of specialists in the safety net. AI can help health systems and providers deploy the specialists they have more intelligently, and enable primary care to fill more of the gaps. Another example would be using AI to more fully integrate and empower community health workers and other peer professionals, which would increase the capacity and cultural competency of care teams.

I also think there’s a role for AI to play in training and retaining California’s health workforce. AI can reduce a lot of the paperwork and administrative tasks that contribute to provider burnout, which is a major reason why a lot of talented professionals leave the health workforce. And there are so many ways that AI can help with the training of new health professionals. Some examples include simulating patient visits to teach diagnostic and other clinical skills, improving distance learning, developing and evaluating curricula, and personalizing education for different learning styles.

Q: Let’s talk about equity. It seems like AI could be part of the solution for health equity. Or could it make inequities worse? How do we make sure we’re getting health equity gains without doing more harm?

A: The precursor to AI is algorithms, and algorithms used by major health care systems were found to have harmful levels of racial bias. Algorithms use existing care utilization data to predict future risk without accounting for the ways racial barriers affect utilization. When these algorithms then get used to deny care, which we know they do, they end up repeating and even exacerbating past inequities. Those same biases and blind spots have the potential to affect AI applications as well.
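To make that mechanism concrete, here is a minimal, hypothetical Python sketch (not any real health system's model, and with made-up patients and numbers) of how a risk score built on past spending can rank two equally sick patients differently when one faces barriers to care:

```python
# Toy illustration of proxy bias: using past spending as a stand-in for
# health need. All patients and numbers here are invented for demonstration.

# Two hypothetical patients with the same underlying health need, but
# patient B1 belongs to a group facing access barriers, so their historical
# utilization (and therefore spending) is lower.
patients = [
    {"id": "A1", "group": "A", "true_need": 8, "past_spending": 9000},
    {"id": "B1", "group": "B", "true_need": 8, "past_spending": 4500},
]

def spending_based_risk(patient):
    """Naive score: assumes past spending reflects future health need."""
    return patient["past_spending"] / 1000

# Rank patients for, say, a care-management program by the naive score.
for p in sorted(patients, key=spending_based_risk, reverse=True):
    print(f'{p["id"]}: risk={spending_based_risk(p):.1f}, true need={p["true_need"]}')

# A1 outranks B1 even though their true need is identical, so an algorithm
# like this reproduces the access barrier it was trained on.
```

Published audits of real risk algorithms have pointed to exactly this choice of label, predicting cost rather than health, as the core source of the bias.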

There are three ways to address that risk. First, acknowledge that racial bias is a problem and design AI tools in a thoughtful way — with the goal of eliminating rather than perpetuating historical inequities. Second, when folks get things wrong, they should be open about it. That’s what Duke Health did when they discovered that an algorithm they created to diagnose kids with sepsis was inadvertently delaying care for children in Spanish-speaking families. Their openness made it possible for others to learn from their mistakes. Third, it’s critical to involve the community in decisions about AI use in health care. At its core, this comes down to governance. Does the leadership body making decisions about AI represent and listen to input from the community? If it does, we should see more informed and equitable decisions.

Q: I have heard you argue that AI access is also an equity issue. What do you mean by that?

A: I've just gone through ways that AI could help solve a number of health care problems. But it's not a given that safety-net providers, which feel many of these problems most acutely, will get the investment and support they need to take full advantage of AI. The big commercial health systems will no doubt get the best that AI has to offer. They will have the resources to purchase the vast amounts of computing power that the most robust AI tools will require. Can we say the same about public hospitals and FQHCs [Federally Qualified Health Centers]? How do we make sure they and the millions of Californians they serve are not left behind? That absolutely is an equity issue.

Q: Are there any AI opportunities that you are especially enthusiastic about?

A: I am super excited about the potential for AI-powered translation to help reduce language barriers in health care. I don't think we will have to wait long for that. In fact, one of the first requests for proposals the state issued on AI in health care is about language access. A recent secret-shopper study showed that 20% of Spanish speakers who tried to get a mental health appointment at their local community health center were told that no one was available to talk to them in Spanish, or simply had their calls cut off. These translation tools can't come soon enough.

I have a personal interest in seeing AI reduce the time that providers spend logging information into medical records. My dad was a dentist. I remember him staying up late at night finishing notes from his patient visits earlier in the day. In physician circles, this is called “pajama time.” We can and should expect AI to take a big chunk of that painstaking administrative work off the plate of providers. That will be huge.

Q: How do you think CHCF might be able to make a difference in the AI space? What might that look like?

A: There are several ways I see CHCF playing a constructive role in this space. We can track how safety-net organizations are using AI and share what is and isn’t working. We can identify ways for state policy to promote responsible use of AI and support safety-net organizations in making an AI transition. And we can help to pilot novel AI applications to make sure they can be used in or adapted to the safety net.

Of course, so much about AI is still unfolding. We may find ways to make a difference that we haven't even thought of yet. We're going forward with an open mind.
