In the illustration, four fish swim in a sea made of words. Image: Freepik.com.

Navigating Bias in AI-Driven Healthcare: Challenges and Solutions

30.09.2024

AI-driven healthcare holds great promise but faces significant challenges in addressing biases against women and minorities, requiring ethical solutions for fair and equitable implementation.

In the health community, the conversation often centers on artificial intelligence (AI) as the future of healthcare, a technology poised to address various crises in the field. AI has been an integral part of medicine for over a decade, used predominantly in radiology for imaging. Its potential to transform healthcare is immense, yet a critical concern is the presence of biases in AI applications, particularly against women and minorities. Understanding and addressing these biases is crucial for the ethical and effective deployment of AI in healthcare.

Understanding Bias in Healthcare

Bias is an inherent human trait that has evolutionary benefits but poses substantial challenges in medicine, particularly given the field's historically male-centric focus. Medical research and testing have traditionally been conducted predominantly on men, resulting in a systemic bias in healthcare education and practice. As the European Society of Cardiology notes, heart attacks have long been viewed as a predominantly male condition. This outdated perspective contributes to the underdiagnosis and undertreatment of heart attacks in women, who may misinterpret their symptoms as stress or anxiety.

In the medical field, bias can be categorized into three types: data-driven, algorithmic, and human. AI algorithms trained on biased datasets can perpetuate existing societal biases, leading to severe consequences such as the misdiagnosis of specific patient groups, including women and ethnic minorities. This can exacerbate health disparities and result in fatal errors. Numerous reports highlight that algorithms often discriminate against vulnerable groups, even in fields where AI shows promise. If AI systems are not properly trained, they can amplify the human biases embedded in electronic health records, resulting in discriminatory outcomes.

The Impact of Bias on AI Algorithms

Algorithmic bias extends beyond race and significantly affects gender disparities. For instance, cardiovascular disease prediction models that claim to predict heart attacks years in advance are often trained on predominantly male datasets. Since cardiovascular disease manifests differently in men and women, an algorithm trained primarily on male data may fail to diagnose women accurately.
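To make this concrete, here is a minimal Python sketch. The data is entirely synthetic, and the sign flip in how the biomarker relates to disease is an invented assumption rather than a clinical fact; the point is only to show how a model trained mostly on one group can score well for that group while failing the underrepresented one:

```python
# Synthetic sketch: a model trained mostly on male patients performs
# worse for female patients when the disease presents differently.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def simulate(n, female):
    # One synthetic biomarker whose link to disease differs by sex.
    x = rng.normal(0.0, 1.0, size=(n, 1))
    coef = -1.5 if female else 1.5   # assumed sign flip between groups
    y = (coef * x[:, 0] + rng.normal(0.0, 0.5, n)) > 0
    return x, y.astype(int)

# Training set dominated by male patients (90% / 10%).
x_m, y_m = simulate(900, female=False)
x_f, y_f = simulate(100, female=True)
model = LogisticRegression().fit(np.vstack([x_m, x_f]), np.hstack([y_m, y_f]))

# Held-out evaluation on each group separately.
x_mt, y_mt = simulate(1000, female=False)
x_ft, y_ft = simulate(1000, female=True)
print("accuracy, male patients:  ", model.score(x_mt, y_mt))
print("accuracy, female patients:", model.score(x_ft, y_ft))
```

On this toy data the model learns the majority group's pattern and scores well above chance for men but below chance for women. Real clinical differences are subtler, but the mechanism is the same.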

Data limitations and the lack of diversity in clinical datasets are critical issues that contribute to bias. Moreover, unconscious biases from researchers and clinicians can seep into AI algorithms, making them biased by design. If ethical considerations are not prioritized, AI’s implementation in clinical practice might fail to deliver equitable benefits, further increasing health disparities.

Addressing the Problem: AI Fairness

Several strategies can be implemented to ensure equitable AI use in healthcare. Open science practices are essential for fostering fairness. These include:

1. Participant-Centered Development: Involving patients and participants in the development of AI algorithms and engaging in participatory science.

2. Responsible Data Sharing: Implementing inclusive data standards that support interoperability and responsible data sharing.

3. Code Sharing: Sharing AI algorithms that can synthesize underrepresented data to address biases.

Future research must focus on developing standards for AI in healthcare that promote transparency and data sharing while safeguarding patient privacy. Standardization is crucial for making data interoperable and impactful. When data are inconsistent, published in incompatible formats, or of varying quality, they become hard to exchange, analyze, and interpret effectively. High-quality training datasets are vital for developing fair AI systems.
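As a small, hypothetical illustration of the interoperability problem (the field names, units, and values below are invented; real initiatives rely on shared standards such as HL7 FHIR), consider two sites exporting the same measurement in incompatible formats:

```python
import pandas as pd

# Two hypothetical sites export the "same" systolic blood pressure
# reading with different field names and units (mmHg vs kPa).
site_a = pd.DataFrame({"patient": ["a1", "a2"], "systolic_mmHg": [128, 141]})
site_b = pd.DataFrame({"id": ["b1", "b2"], "sys_bp_kpa": [17.1, 18.8]})

KPA_TO_MMHG = 7.50062  # unit conversion factor

# Map both exports onto one agreed schema before pooling.
harmonized = pd.concat([
    site_a.rename(columns={"patient": "patient_id",
                           "systolic_mmHg": "systolic_mmhg"}),
    site_b.rename(columns={"id": "patient_id"})
          .assign(systolic_mmhg=lambda d: d["sys_bp_kpa"] * KPA_TO_MMHG)
          .drop(columns="sys_bp_kpa"),
], ignore_index=True)
print(harmonized)
```

Without the harmonization step, the two exports could not even be concatenated meaningfully, let alone analyzed together.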

Participant-Centered Development

Involving patients in the development of AI algorithms ensures that the technology addresses the needs and concerns of those it aims to serve. This participatory approach can help identify potential biases early in the development process and promote more equitable healthcare outcomes.

Responsible Data Sharing

Implementing inclusive data standards is essential for responsible data sharing. These standards should support interoperability, allowing data from diverse sources to be integrated and analyzed effectively. Inclusive data standards help ensure that AI systems are trained on comprehensive datasets that reflect the diversity of the patient population.
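A minimal sketch of what checking such a dataset against the population it should represent could look like, with invented column names and reference shares:

```python
import pandas as pd

# Hypothetical training dataset: which groups does it actually contain?
training = pd.DataFrame({"sex": ["M"] * 820 + ["F"] * 180})

# Assumed reference shares for the target patient population.
population_share = {"M": 0.49, "F": 0.51}

observed = training["sex"].value_counts(normalize=True)
for group, expected in population_share.items():
    print(f"{group}: dataset {observed.get(group, 0.0):.0%}, "
          f"population {expected:.0%}")
```

A gap like the one this toy dataset shows (82% versus 49%) is exactly the kind of representation problem inclusive data standards are meant to surface early.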

Code Sharing

Openly sharing AI algorithms that can synthesize underrepresented data is crucial for addressing biases. Shared code allows researchers to collaborate on and improve algorithms, making them more robust and equitable. This practice promotes transparency and helps build trust in AI systems.
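As one simple rebalancing technique, the sketch below uses random oversampling of the underrepresented group; methods that genuinely synthesize new data (for example SMOTE-style interpolation or generative models) take this further. All arrays are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature matrix with a 9:1 group imbalance.
X = rng.normal(size=(1000, 4))
group = np.array(["M"] * 900 + ["F"] * 100)

male_idx = np.flatnonzero(group == "M")
female_idx = np.flatnonzero(group == "F")

# Resample the underrepresented rows with replacement until the
# groups are the same size.
resampled = rng.choice(female_idx, size=male_idx.size, replace=True)
X_balanced = np.vstack([X[male_idx], X[resampled]])
print(X_balanced.shape)  # (1800, 4): both groups equally represented
```

Publishing such preprocessing code alongside the model lets other researchers inspect, reproduce, and improve how underrepresentation was handled.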

Governance and Trust

Effective governance is crucial for building trust in AI systems. Clear guidelines and laws are necessary to ensure the ethical deployment of AI in healthcare. Initiatives such as the EU's AI Act represent a step towards establishing a regulatory framework for AI applications in healthcare, ensuring they are fair, transparent, and equitable.

Governance should also include continuous monitoring and evaluation of AI systems to detect and mitigate biases as they arise. Regular audits and updates to AI algorithms can help maintain fairness and effectiveness. Furthermore, involving diverse stakeholders, including ethicists, patient advocates, and minorities, in the governance process can provide valuable insights and promote inclusivity.
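A fairness audit can be as simple as comparing an error metric across groups. The sketch below, with illustrative labels and predictions rather than output from any real system, compares true-positive rates by sex, the kind of check a regular audit might run:

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    # Share of actual positives the model caught.
    positives = y_true == 1
    return (y_pred[positives] == 1).mean()

# Illustrative ground-truth labels and model predictions.
y_true = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 0, 0, 1])
group  = np.array(["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"])

for g in ("M", "F"):
    mask = group == g
    tpr = true_positive_rate(y_true[mask], y_pred[mask])
    print(f"TPR, group {g}: {tpr:.2f}")
```

A persistent gap like the one in this toy output (0.67 versus 0.33) would flag the model for review or retraining.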

Public awareness and education about AI in healthcare are also crucial for building trust. Transparency in how AI systems are developed, validated, and used can help demystify the technology and alleviate concerns about bias and discrimination.

The CyberCare Kymi project is co-funded by the European Union via the Regional Council of Kymenlaakso, from the Just Transition Fund (JTF) of the European Union. The project duration is 1.11.2023–31.12.2025.

Sources

Antipolis, A. 2021. Heart attack diagnosis missed in women more often than in men. Web page. Available: https://www.escardio.org/The-ESC/Press-Office/Press-releases/Heart-attack-diagnosis-missed-in-women-more-often-than-in-men [Accessed 15.7.2024]

Park, Y. 2021. IBM researchers investigate ways to help reduce bias in healthcare AI. Web page. Available: https://research.ibm.com/blog/ibm-reduce-bias-in-healthcare-ai [Accessed 15.7.2024]

Aellen, F. M., Faraci, F. D., Hu, Q., Norori, N. & Tzovara, A. 2021. Addressing bias in big data and AI for health care: A call for open science. Web page. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8515002/ [Accessed 15.7.2024]

Writer Janine Klauenbösch

The writer works in the CyberCare Kymi project as a cybersecurity expert at South-Eastern Finland University of Applied Sciences.