
UCL Institute of Healthcare Engineering


Can AI in healthcare be truly ethical?

24 June 2024


In late January 2024, the WHO released an updated version of their guidelines for “Ethics and governance of artificial intelligence for health”. The update came less than three years after the release of the original document, prompted by the rapid pace of AI development. The concerns the organisation raises may not be surprising.

The WHO wants to “Promote AI that is responsive and sustainable” and “Ensure inclusiveness and equity”, along with four other core principles in a similar vein. The report is filled with recommendations for member states on how to achieve good, ethical AI implementation in their healthcare systems. But as the technology develops, so do the concerns. In the last few years, the news has been filled with stories about bias within AI models. A question demands to be asked - can AI in healthcare ever be truly ethical?

What are the models?

The WHO guidelines focus on one specific type of AI - Large Multimodal Models. These are a slightly broader category than the more familiar Large Language Models (LLMs), the term I will use here for familiarity. These models are a type of generative AI. They have taken the world by storm, led by ChatGPT, the chatbot from US company OpenAI. LLMs are trained on text available on the internet, such as articles and books, and learn to recognise common patterns in how words are used until they can reply to prompts with language that imitates human writing.

Generative AI models already have a range of applications within healthcare. In their guidelines, the WHO outlines five main areas of use, from diagnostic care to administrative work and education. One of the major, more specific concerns the WHO points out is the risk that biased systems pose. It is a complicated topic in which potentially substantial benefits are weighed against dangers of a similar scale. Public opinion on the use of these models in healthcare also seems to vary. In a 2023 survey conducted by the Ada Lovelace Institute, representative of the UK population, 54% of people thought that virtual healthcare assistants could be very or somewhat beneficial. At the same time, however, 55% found the technology very or somewhat concerning.


Where does the bias come from?

Beatrice Taylor is in the final year of her PhD at the UCL Centre for Medical Image Computing. Her research looks at disease progression in rare types of dementia, using AI to find patterns and make predictions about how dementia develops. The model she works with is not an LLM - it makes predictions but does not generate any new data. Nevertheless, she faces many of the same issues that concern the WHO.

Taylor explains that it is very difficult to get good data to base her study on. “We all want data. We want large datasets, which are representative of the population for people we're interested in,” she says. “In an ideal world, it would represent all ethnicities, socioeconomic backgrounds, and abilities. Reality is you have all these biases as to who gets selected for these datasets.” She gets her data from the UCL Hospitals, which she points out already have a rather narrow patient base of upper-class white people living in central London.

Whilst excited about the potential applications of her model and the benefits it could bring, Taylor has conflicting feelings about AI in general and its rapid spread across our society. “I think we need to be very careful with how we use it,” she says. For her, the bigger concern is funding healthcare services “so that we can employ humans to care for other humans”. She still sees potential in AI and its ability to lessen workloads, but worries about us becoming too dependent on systems that may not stay free and accessible to all.

Is it more than just bias?

Some scholars go so far as to argue that bias is not a strong enough word to describe the harm that AI can do and already does, and that the algorithms we use are in fact oppressive. Emerging from the work of Black feminist scholars such as Safiya Noble and Ruha Benjamin, the idea of oppressive algorithms rests on a large body of evidence showing that the material our AI algorithms are built on discriminates against people of colour, women, queer people, and other marginalised groups. Because of this, these scholars argue that the algorithms systematically reproduce inequalities in society. The argument is different from saying that there is bias in the data, explains Simon Lock, Associate Professor in Science Communication and Governance at UCL, because it recognises that the problem will not disappear if we just get better data. “It's a direct representation of the social world and all of its structural inequities,” they say.

Attempting to base your algorithms on good, inclusive data isn’t pointless, Lock clarifies. But, like Taylor, they urge us to look deeper, at the reasons why we are adopting AI in the first place. Like many scholars in the field, they point to capitalism, which they say is connected to many of the issues we see today. “It’s ultimately about leveraging capital money, and capitalism is pretty violent by nature and has an awful lot of repression baked into it”.

In their guidelines, the WHO repeatedly emphasises the immense benefits that AI could bring to healthcare if we are careful and avoid the risks. The UK public seems to share this point of view. So does Taylor, and Lock appears to reach the same conclusion. “AI is programmed by people, so there's no reason why we couldn't program it to do different things,” they say. But with the way our society is shaped today, it will take more than just being careful with the data to make AI in healthcare truly ethical.


Written by Sophia Sancho