AI, Bias, and Cultural Competency in Healthcare
I first saw bias in AI not in a lab but in my own home. My dad’s accent was often misunderstood by Alexa, a failure rooted in the limited diversity of its training data. That experience made me realize how much more serious the consequences could be in healthcare, where AI systems guide diagnosis and treatment. In my work developing AI models, I focus on how data structures themselves can encode bias, and I explore how cultural competency and thoughtful policy can ensure that AI serves all populations fairly.
Sanjay Balasubramanian
8/21/2025 · 2 min read
The first time I noticed bias in artificial intelligence, it was not in a lab or a research paper. It was in my own living room. My dad would ask Alexa for something simple, like the day's weather or his favorite song, and more often than not Alexa would fail to understand him. His accent, shaped by years of speaking English as a second language, did not fit the patterns Alexa had been trained to recognize. I could see the frustration on his face. He was not mispronouncing words; he was speaking in a way the model had never been designed to handle.
That moment stayed with me. If a voice assistant could not understand my dad at home, what would happen when AI systems were used in healthcare, where misinterpretation could mean a missed diagnosis, the wrong dosage, or delayed treatment? I began to realize how deeply data structures, the categories and inputs that form the backbone of AI, determine who is heard and who is left behind. Alexa was not failing because my dad was unclear. It was failing because the system had been trained primarily on white, American, and European accents.
This is where cultural competency becomes critical. In medicine, cultural competency means understanding how a patient’s traditions, diet, language, and social history shape their health. In AI, it means making sure datasets reflect the diversity of the people they are supposed to serve. When voice recognition systems fail with certain accents, or when diagnostic models underperform for women and minority groups, the problem is not just technical. It is cultural.
Policy is beginning to respond to this challenge. The White House's Blueprint for an AI Bill of Rights calls for protections against algorithmic discrimination. The European Union's AI Act requires developers to document and assess datasets for bias. In healthcare, regulators and institutions are recognizing the need for AI tools to be trained on representative clinical trial data, especially for populations historically excluded from research. These changes are important, but they also point to a larger truth: fixing outcomes requires us to look at how data is structured at the very beginning.
When I started building AI models myself, I carried that realization with me. I saw how race was often collapsed into broad categories, how gender was reduced to binary labels, and how cultural context was ignored altogether. No matter how advanced an algorithm becomes, it cannot escape the limitations of its data structures. The Alexa in my living room had already shown me the stakes. If AI fails to recognize my dad's voice, it is an inconvenience. If AI fails to recognize the biology or symptoms of patients from diverse backgrounds, it can become a matter of life and death.
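To make the data-structure point concrete, here is a minimal sketch in Python of how an intake schema can discard information before any model ever sees a patient's record. The record types, field names, and example values are my own hypothetical illustrations, not drawn from any real system I have worked on.

```python
# A minimal, hypothetical sketch of how schema choices can discard
# information before any model is trained. Record types, field names,
# and example values are illustrative, not from a real system.

from dataclasses import dataclass


# A coarse intake record: race collapsed into broad buckets, gender forced
# into a binary label, and no field at all for cultural context.
@dataclass
class CoarsePatientRecord:
    race: str      # e.g. "Asian" -- one bucket covering many distinct populations
    gender: str    # "M" or "F" only
    age: int


# A richer intake record: self-described identity, preferred language, and
# free-text social history that downstream models and clinicians can use.
@dataclass
class RicherPatientRecord:
    self_described_ethnicity: str   # the patient's own wording
    gender_identity: str            # open-ended, not restricted to two labels
    preferred_language: str         # shapes symptom reporting and consent
    age: int
    social_history_notes: str = ""  # diet, traditions, migration history, etc.


coarse = CoarsePatientRecord(race="Asian", gender="M", age=62)
richer = RicherPatientRecord(
    self_described_ethnicity="Tamil Indian American",
    gender_identity="man",
    preferred_language="Tamil",
    age=62,
    social_history_notes="Vegetarian diet; speaks English as a second language.",
)

# No later algorithm can recover what the coarse schema never stored.
print(coarse)
print(richer)
```

The point is not these particular fields but the asymmetry: whatever the intake schema flattens stays flattened for every model built on top of it.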
The challenge is not only to make AI more accurate but also to make it more culturally aware. That means designing models with diverse data, testing them across different populations, and building accountability into policy frameworks. It also means including not only engineers but also clinicians, social scientists, and ethicists who understand the human realities behind the numbers.
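One concrete form that testing can take is disaggregated evaluation: reporting a model's performance for each population group rather than as a single overall score. The sketch below uses made-up predictions and generic group labels purely to illustrate the idea.

```python
# A minimal sketch of disaggregated evaluation with toy, made-up data.
# Group names, labels, and predictions are hypothetical placeholders.

from collections import defaultdict

# (group, true_label, predicted_label) for a held-out test set that has
# been annotated with a demographic or population group.
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in predictions:
    total[group] += 1
    correct[group] += int(truth == pred)

# Per-group accuracy exposes gaps that a single overall number can hide.
for group in sorted(total):
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f} over {total[group]} cases")

overall = sum(correct.values()) / sum(total.values())
print(f"overall: accuracy = {overall:.2f}")
```

The same breakdown applies to error rates, calibration, and false negatives, which is where missed diagnoses tend to hide.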
AI has the potential to transform healthcare, but only if it listens to every voice. My dad’s experience with Alexa taught me that bias in AI begins at home. My work in AI research continues to remind me that solving this problem will require more than better algorithms. It will require systems that are both technically sound and culturally competent.