New UChicago initiative aims to improve health care algorithms for underrepresented groups


Algorithms have become increasingly pervasive as organizations in both the public and private sectors have sought to automate tasks that once required human intelligence. From facial recognition to decisions about creditworthiness to medical assessments, decision-makers rely on algorithms to help improve their own perceptions and judgment.

But the use of algorithms in so many domains has been accompanied by equally pervasive concerns that those algorithms may not produce equitable outcomes. What if algorithms output results that are biased, intentionally or unintentionally, against a subset of people, particularly underrepresented groups such as women and people of color? Given their application in contexts with enormous human consequences, and at tremendous scale, a biased algorithm could do significant harm.

Researchers with Chicago Booth’s Center for Applied Artificial Intelligence (CAAI) have seen the kind of harm even well-intentioned algorithms can produce. In a 2019 study, Sendhil Mullainathan—the Roman Family University Professor of Computation and Behavioral Science and the center’s faculty director—found that an algorithm used to evaluate millions of patients across the United States for enrollment in care-management programs was biased against Black patients, excluding many who should have qualified for such programs from being enrolled. Mullainathan co-authored the study with Ziad Obermeyer of the University of California, Berkeley; Brian Powers of Boston’s Brigham and Women’s Hospital; and Christine Vogeli of Partners HealthCare.


This story was first published by UChicago News.
