How AI is Helping in Mitigating Risk for Healthcare Organizations
Fremont, CA: Better clinical decision support, population health interventions, patient self-care, and research are just a few of the promising applications of artificial intelligence (AI).
Aside from its benefits, AI can introduce risks that undermine trust in AI solutions and must be addressed. These risks include, for example, propagating bias inherent in the source data; limited transparency of the computational algorithms; AI performance in a "lab setting" failing to carry over to real-world scenarios; declining prediction accuracy over time (model drift) caused by poorly understood or uncalibrated model parameters; and cybersecurity risks.
So, how can the healthcare industry maintain AI's momentum and avoid the next AI winter (a flattening of the adoption curve)? Before making significant investments in AI, several organizational prerequisites should be in place. These include a clear vision of what problems AI will help solve; in-house talent with both technical AI expertise and health domain understanding; and a review process to assess the potential risks as well as ethical implications of each AI solution. Once those conditions are met, additional steps can be taken to ensure the long-term success and return on investment of one's AI project.
In this article, we look at how to mitigate three groups of prominent risks:
Inefficient Data and Algorithm Management
Biased AI model outputs can occur when an AI model is trained on data that does not accurately represent the population the solution is designed to support. For instance, if an AI solution predicts health outcomes for a general population but the data used to train the algorithm is restricted to senior citizens, there is a major risk that the model's predictions for other age groups will be inaccurate.
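One practical safeguard is to compare the demographic make-up of the training data against the target population before training. The sketch below is a minimal, hypothetical example (the age bands, tolerance, and data are assumptions, not from the article) of flagging under-represented groups:

```python
from collections import Counter

def representation_gaps(train_ages, population_shares, tolerance=0.10):
    """Flag age bands whose share of the training data falls short of
    their share of the target population by more than `tolerance`."""
    def band(age):
        return "65+" if age >= 65 else "40-64" if age >= 40 else "18-39"

    counts = Counter(band(a) for a in train_ages)
    total = len(train_ages)
    gaps = {}
    for group, pop_share in population_shares.items():
        train_share = counts.get(group, 0) / total
        if pop_share - train_share > tolerance:
            gaps[group] = (round(train_share, 2), pop_share)
    return gaps

# Hypothetical training set skewed toward senior citizens
train_ages = [72, 80, 68, 75, 66, 90, 70, 45, 30, 71]
population = {"18-39": 0.35, "40-64": 0.40, "65+": 0.25}
print(representation_gaps(train_ages, population))
# → {'18-39': (0.1, 0.35), '40-64': (0.1, 0.4)}
```

Here 80 percent of the training records fall in the 65+ band, so both younger bands are flagged as under-represented relative to the general population the model is meant to serve.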
Similarly, selecting the wrong target variable for a prediction can introduce bias. Researchers discovered, for instance, that a prediction algorithm widely used by health insurers to identify individuals likely to require health interventions exhibited significant bias. In that case, previous healthcare expenditures were used as a proxy for "health status" and to forecast future needs. For some population segments, however, previous resource use is not an accurate predictor of healthcare need. Because minority populations receive fewer health services and incur lower healthcare costs than other segments for a variety of reasons (e.g., lack of insurance coverage), the algorithm did not properly represent their healthcare needs; previous healthcare use was not a valid proxy for the health status of that population segment. To minimize bias in the source data, developers must sample large data sets and ensure their training data accurately represents the population for which predictions are sought.
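The proxy-variable failure described above can be illustrated with a small toy example. The patients, need scores, and cost figures below are entirely hypothetical: for "group B," access barriers suppress spending, so prior cost understates true clinical need, and ranking by cost selects a different set of patients than ranking by need would:

```python
# Hypothetical patients: (id, true_need, prior_cost).
# Group A: spending tracks need. Group B: spending is suppressed
# by access barriers, so cost understates need.
patients = [
    ("A1", 9, 9000), ("A2", 5, 5000), ("A3", 2, 2000),
    ("B1", 9, 3000), ("B2", 7, 2500), ("B3", 2, 800),
]

def top_k(patients, key, k=3):
    """Select the k patients ranked highest by the given key."""
    return {p[0] for p in sorted(patients, key=key, reverse=True)[:k]}

by_cost = top_k(patients, key=lambda p: p[2])  # proxy: prior spending
by_need = top_k(patients, key=lambda p: p[1])  # ground truth: clinical need

print("selected by cost:", by_cost)  # {'A1', 'A2', 'B1'}
print("selected by need:", by_need)  # {'A1', 'B1', 'B2'}
```

Ranking by the cost proxy admits A2 (moderate need, high spending) while missing B2 (high need, suppressed spending), which is the shape of the bias the researchers observed.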
Malicious actors target healthcare systems on a regular and aggressive basis. We've seen four major ransomware-related outages in the healthcare sector in May and June 2021 alone. In one case, the threat actors behind the attack stole data for over 150,000 patients.
AI systems provide numerous benefits, but they are also vulnerable to cyberattacks and must be hardened. Adversarial attacks can target any component of the technical delivery stack, including associated data sets and enabling infrastructure. Improperly managed access credentials for development and production environments can lead to unintended consequences, and because AI aggregates data and is used widely within an organization, it carries an inherent risk of sensitive data spillage. Trust is critical to AI adoption, and nothing puts trust at risk like the prospect of security attacks, data spillage, or data breaches. As the healthcare community at large continues to be a top target for cyberattacks, it is critical to have the necessary foundational controls and measures in place to mitigate risk: a properly trained workforce, coupled with governance and data management processes that allow for secure data access and an understanding of how and where data will be used.