UK scientists caution against embedding health inequalities through AI

Researchers from the University of Leicester and the University of Cambridge highlighted systemic bias in the data that AI systems are trained on


UK epidemiologists have called for the responsible implementation of artificial intelligence (AI) – taking into account systemic bias within training data.

Writing in the Journal of the Royal Society of Medicine, researchers from the University of Leicester and the University of Cambridge warned that the growth of AI must not be allowed to exacerbate existing global and ethnic health inequalities.

The authors highlighted that ethnic minorities are under-represented in published research, with smaller sample sizes leading to lower levels of statistical certainty.

This has the potential to cause harm, for example through drug treatments that prove ineffective in under-represented groups.
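To illustrate the sample-size point, here is a minimal sketch (not from the paper; the response rate and group sizes are hypothetical) of how the roughly 95% confidence interval around an estimated drug-response rate widens as a study subgroup shrinks:

```python
import math

def ci_half_width(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the ~95% confidence interval for a proportion p
    estimated from a sample of n people (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical 60% drug-response rate, estimated in two groups:
# a well-represented group (n=2000) and an under-represented one (n=150).
for n in (2000, 150):
    hw = ci_half_width(0.60, n)
    print(f"n={n:>4}: estimated response rate 60% ± {100 * hw:.1f} points")
```

The same point estimate carries roughly ±2 percentage points of uncertainty at n=2000 but nearly ±8 at n=150, which is the lower statistical certainty the authors describe.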

“If the published literature already contains biases and less precision, it is logical that future AI models will maintain and further exacerbate them,” the researchers noted.

The authors also warned of the potential for AI to worsen health inequalities in low- and middle-income countries, given that most AI models are developed in wealthier nations.

The authors reasoned that if large language models are trained on published data from populations vastly different from those in low- and middle-income countries, few of their outputs will be “widely generalisable and truly inclusive.”

The authors recommended a series of actions to avoid exacerbating health inequalities.

These include clearly describing the body of data used in the development of an AI system, addressing ethnic health inequalities in research and ensuring that data used to train AI is adequately representative.

The researchers added that further study is required to understand how well large language models and other AI models generalise to ethnically diverse populations.