Key Takeaways
- Many large translation companies have begun to integrate AI tools into their offerings, and new AI-first start-ups are working hard to challenge these legacy organizations.
- Health care providers’ and payers’ early use of AI-enabled language access tools is focused on low-risk use cases, such as the translation of previsit instructions, preventive care reminders, and postdischarge patient notes. These tools are being tested as support for human translation and interpretation, not as a replacement for it.
- With clearer federal and state regulations, health care organizations may increasingly rely on hybrid models in which AI improves efficiency and humans ensure safety and compliance.
More than six million Californians — nearly one in five — speak English less than “very well.” For these residents, a language barrier can mean missed diagnoses, medication errors, and worse health outcomes. Language access services are not optional: they are a legal requirement and a cornerstone of safe, equitable care.
A new generation of AI-powered tools is beginning to reshape how health care organizations deliver interpretation and translation services. CHCF is examining what this shift means for patients, providers, and the safety net — and whether AI can responsibly expand access to care for Californians with limited English proficiency.
What Early Pilots Are Showing
Health care providers and payers are testing AI-enabled language tools cautiously, focusing on lower-risk use cases like previsit instructions, patient education materials, and routine administrative conversations. These tools are being deployed to support human interpreters — not replace them. Pilot programs at Children’s Hospital Los Angeles, Contra Costa Health System, and the California Health and Human Services Agency suggest meaningful efficiency gains are possible when AI is paired with structured human oversight.
Guardrails Matter
Federal and state regulations — including Title VI, ACA Section 1557, and Medi-Cal rules — require access to qualified human interpreters in clinical, legal, and consent-related contexts. AI may supplement these workflows but cannot substitute for them. Without clearer regulatory guidance and quality assurance benchmarks, adoption will remain fragmented. With the right supports in place, hybrid models — where AI improves efficiency and humans ensure safety — offer a responsible path forward.
This brief draws on a literature review, online research, and interviews with state and local implementers, health system leaders, community-based advocates, and AI vendors. It maps current pilots, emerging governance frameworks, and early lessons from the field.