How flaws in clinical algorithm design are causing more harm than good

More and more, artificial intelligence (AI) is being used to make decisions—evaluating job applications, approving home loans, and even predicting who is more likely to commit a crime. However, AI is designed by humans. That means these algorithms are often built on homogeneous data sets, questionable rules, and implicit biases, while omitting environmental factors—all of which can have a negative impact on a person’s access to healthcare.

AI in healthcare is sometimes used as a risk prediction tool, designed to make responses to health issues more individualized and equitable. Algorithms use past data to determine who would benefit most from certain programs and treatment options, and how much insurance companies will cover. The data behind these decisions come from factors such as healthcare costs, treatment adherence, and utilization of services.

In healthcare, the algorithms used to make critical decisions about treatment often use health costs as a proxy for need: patients who generate more healthcare spending appear to be sicker and therefore in greater need of care. But can spending alone really determine who is worthy of more care?
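To make that mechanism concrete, here is a minimal sketch of how a cost-as-proxy risk tool works. Everything in it is invented for illustration (the patient records, the dollar figures, and the top-half referral cutoff), but the logic mirrors the approach described above: rank patients by predicted future spending and flag only the highest spenders for extra care, without ever looking directly at health need.

```python
# Minimal sketch of a cost-as-proxy risk tool. All patient data and the
# referral cutoff are hypothetical; real tools are far more complex, but
# the core logic is the same: predicted spending stands in for predicted need.

patients = [
    {"name": "A", "predicted_annual_cost": 3200,  "chronic_conditions": 4},
    {"name": "B", "predicted_annual_cost": 9800,  "chronic_conditions": 2},
    {"name": "C", "predicted_annual_cost": 1500,  "chronic_conditions": 5},
    {"name": "D", "predicted_annual_cost": 12400, "chronic_conditions": 1},
]

# Rank by predicted cost and refer the top half to a care-management program.
ranked = sorted(patients, key=lambda p: p["predicted_annual_cost"], reverse=True)
referred = ranked[: len(ranked) // 2]

for p in referred:
    print(f"Refer {p['name']}: ${p['predicted_annual_cost']} predicted cost, "
          f"{p['chronic_conditions']} chronic conditions")
```

In this toy example, patients C and A carry the most chronic conditions but the lowest predicted spending, so they are never referred. If less money is spent on a group for any reason, including barriers to getting care in the first place, the tool reads that group as healthier. That is exactly the failure the study described below found.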

Can we trust data?

A study published in Science in 2019 reported that racial bias had been detected in health algorithms. The study found that a widely used algorithm was less likely to refer Black patients for extra care than white patients who were equally sick. The algorithm is used by hospitals and insurers to help evaluate and manage care for 200 million people in the United States each year. Because less money is spent on the care of Black patients, the algorithm concluded that they were healthier than their white counterparts.

Since then, Optum, the company that made the algorithm, has denied racial bias, stating that other parts of the algorithm had not been applied.

“The algorithm is not racially biased,” a spokesperson responded, adding that the study mischaracterized a cost prediction algorithm used in a clinical analytics tool based on one health system’s incorrect use of it, which was inconsistent with any recommended use of the tool. According to the company, the algorithm is designed to predict future costs that individual patients may incur based on past healthcare experiences and does not result in racial bias when used for that purpose, a point with which the study authors agreed.

Making healthcare decisions based on race is proving to be an obsolete practice, as race is a social construct, not a biological one. But taking race out of the equation entirely can also be detrimental: tracking factors such as race, gender, socioeconomic background, and disability can reveal disparities and help target efforts to improve care overall. It’s a delicate balance, and not one easily found.

The algorithms used in healthcare can discriminate against the people who need care the most by weighting scores in favor of non-Black patients. For example, in nephrology (kidney care), a tool called the STONE score is used to predict whether a patient’s symptoms are caused by kidney stones. The calculator scores sex (gender), timing (duration of pain), origin (i.e., race), nausea, and erythrocytes (red blood cells in the urine). A low score indicates a low probability of kidney stones, while a high score indicates the opposite. But the tool inexplicably adds three points for patients who are “non-black,” as the sketch below illustrates. The same kind of race-based adjustment appears in the Get With The Guidelines-Heart Failure Risk Score.
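To see how a rubric like this plays out, here is a rough sketch of a STONE-style calculation. The point values follow commonly published versions of the score, but the function and the example patient are invented for illustration; the only thing that differs between the two cases below is race.

```python
# Rough sketch of a STONE-style score. Point values mirror commonly
# published versions of the rubric; treat them as illustrative, not clinical.

def stone_score(male, hours_of_pain, non_black, nausea_or_vomiting, blood_in_urine):
    score = 0
    score += 2 if male else 0                      # S: sex
    if hours_of_pain < 6:                          # T: timing (duration of pain)
        score += 3
    elif hours_of_pain <= 24:
        score += 1
    score += 3 if non_black else 0                 # O: "origin" (race)
    if nausea_or_vomiting == "vomiting":           # N: nausea
        score += 2
    elif nausea_or_vomiting == "nausea":
        score += 1
    score += 3 if blood_in_urine else 0            # E: erythrocytes in the urine
    return score

def category(score):
    return "high" if score >= 10 else "moderate" if score >= 6 else "low"

# Two patients with identical symptoms; only race differs.
for label, non_black in [("non-Black", True), ("Black", False)]:
    s = stone_score(male=True, hours_of_pain=4, non_black=non_black,
                    nausea_or_vomiting="nausea", blood_in_urine=True)
    print(f"{label}: score {s}, {category(s)} probability of kidney stones")
```

Run as written, the non-Black patient scores 12 and lands in the high-probability range, while the otherwise identical Black patient scores 9 and lands in the moderate range.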

This kind of feedback could sway physicians away from giving Black patients a thorough evaluation for kidney stones and lead to a misdiagnosis. Race is now optional on these forms, and medical staff are encouraged to consider more than just a checklist when determining the next steps in care. Why does race have to be included at all?

AI and people living with HIV

People living with HIV/AIDS are also subjected to biases in AI. It’s not what the algorithm is saying about their needs, but more about what it doesn’t consider. Faye Cobb Payton, PhD, Chief Programs Officer at Kapor Center in Oakland, California, says that it’s often the small data that get left out of the picture. “What the big data will not do is show an understanding of the nuances that come with people living with HIV itself. Small data, like the human condition and lived experiences, are overlooked.”

What algorithms miss are the social determinants of health that can impact outcomes, such as access to local resources, medical providers denying care, high prescription costs, and low health literacy. Payton explains that a person’s community and access level must be considered when using algorithms to decide who needs care.

“Where one actually lives, works, and plays can be a mismatch in terms of what an algorithm could potentially tell you about a person, particularly communities of color. I worked on a project in a rural environment where the antiretrovirals were available to persons living with HIV but they weren’t accessible because the people didn’t have affordable transportation. So, when you think of those kinds of things, the algorithms must be paired with some understanding of communities and lived experiences. Without that context, we miss the mark.

“What AIDS has shown us, and probably COVID has heightened, is that no one is stereotypical. Everything is behavioral-based. What the data will not do is show us the nuances that come with people living with HIV itself.”

Calling them out

How do we ensure that the algorithms are more equitable going forward? For one, development teams should be composed of experts from diverse backgrounds and professional experiences. Also, call out the biases on a regular basis: if professionals and researchers who have experience evaluating algorithms regularly cite the biases they see, it sends a message to private companies to be careful about the systems they trust. Finally, the algorithms being used right now need to be audited and reformulated to eliminate factors such as cost and race unless they are explicitly connected to other determinants of health.

If everyone started out on the same proverbial playing field and access were equal and uninterrupted, this AI thing could be an even bigger hit. Unfortunately, that’s not how our world works. And it’s not how healthcare works either. Data creates a picture of the work that needs to be done. But one thing it cannot do is erase the human experience.