Rise of the machines: Can AI eliminate bias in public healthcare?

The development of Artificial Intelligence has moved ever further into the public consciousness in recent months. Within the realm of healthcare, AI algorithms are already in use in some areas to diagnose and detect disease, predict outcomes and guide treatment. A growing body of research is being commissioned to examine newly developed apps harnessing AI to support speedier, more efficient practice across vast areas of medicine, and AI is being used to accelerate drug discovery and identify potential new cures. The potential benefits of AI in all arenas of healthcare are manifold, and its champions view the growth of AI as heralding an entirely new era of medical breakthroughs.

However, as with all sea changes throughout human history, the dawn of AI has raised difficult ethical questions and has magnified the disparities already apparent within healthcare. “One of the problems that’s been shown in AI in general and in particular for medicine is that these algorithms can be biased, meaning that they perform differently on different groups of people,” says Paul Yi, Director of the University of Maryland Medical Intelligent Imaging Center.

The preventable differences in disease burden, injury and access to healthcare that disproportionately affect people of colour and underserved communities have already been widely documented and studied. The question now arising is whether the development of AI algorithms in healthcare will widen these health disparities further. The concerns are so grave that the WHO itself has called this month for caution when using AI in public healthcare, citing worries that the data used to train AI tools could be biased or misleading.

If the data used to train an AI algorithm underrepresents a particular gender or ethnic group, the algorithm will not perform accurately for that group, thus compounding existing inequalities relating to socioeconomic status, ethnicity, gender or sexual orientation. “Algorithmic bias presents a clear risk of harm that… is often disproportionately distributed to marginalised populations,” says Benjamin Collins of Vanderbilt University Medical Center.
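In concrete terms, this kind of bias can be surfaced by evaluating a model separately on each demographic subgroup rather than on the test set as a whole. Below is a minimal, hypothetical sketch of that check in Python, assuming a scikit-learn-style classifier and a pandas test set; the column name “ethnicity” and the helper name are illustrative, not drawn from any study cited here.

```python
# Minimal, hypothetical sketch: compare a model's accuracy across
# demographic subgroups. A large spread between groups is the kind of
# bias Yi and Collins describe. Assumes scikit-learn and pandas.
import pandas as pd
from sklearn.metrics import accuracy_score

def subgroup_accuracy(model, X: pd.DataFrame, y: pd.Series, group_col: str) -> pd.Series:
    """Accuracy of `model` within each subgroup defined by `group_col`."""
    # The demographic column is used only for grouping, not as a model input.
    preds = pd.Series(model.predict(X.drop(columns=[group_col])), index=X.index)
    return X.groupby(group_col).apply(
        lambda g: accuracy_score(y.loc[g.index], preds.loc[g.index])
    )

# Example usage (names hypothetical):
# print(subgroup_accuracy(model, X_test, y_test, group_col="ethnicity"))
```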

“To reduce bias in AI, developers, inventors and researchers of AI-based medical technologies need to consciously prepare for avoiding it by proactively improving the representation of certain populations in the dataset,” says Bertalan Meskó, Director of the Medical Futurist Institute in Budapest. The issue of bias in medical research is not a new one, but with the advent of AI and its use within public healthcare it becomes more pronounced.
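One proactive step of the kind Meskó describes is rebalancing the training data so that underrepresented groups carry equal weight. The sketch below shows simple oversampling to parity; it is an illustration of one possible technique, not Meskó’s own method, and the DataFrame and column names are hypothetical.

```python
# Hypothetical sketch: oversample underrepresented subgroups (with
# replacement) until every group matches the largest group's size.
import pandas as pd

def oversample_to_parity(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Return a copy of `df` in which each subgroup appears equally often."""
    target = df[group_col].value_counts().max()
    parts = [
        group.sample(n=target, replace=True, random_state=seed)
        for _, group in df.groupby(group_col)
    ]
    # Shuffle so the resampled rows are not blocked by group.
    return pd.concat(parts).sample(frac=1, random_state=seed).reset_index(drop=True)

# Example usage (names hypothetical):
# balanced = oversample_to_parity(train_df, group_col="ethnicity")
```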
