Altruistic algorithms: artificial intelligence in healthcare

By Tanya Gracias

Artificial intelligence (AI) refers to an algorithm or machine that can simulate intelligent human behaviour.1 AI is not one specific technology, but rather a collection of technologies defined by their capabilities.2 At present, two types of AI algorithms are particularly applicable in the healthcare industry: machine learning (ML) and natural language processing (NLP).3 ML enables computers to make predictions based on given data inputs. The process of model building is called “training”, as these algorithms are trained to recognise common features from a data set known as the “training data”.4 NLP, on the other hand, is a form of AI that can understand language and is commonly used for information retrieval and summarisation.5 In clinical settings, NLP is used to analyse clinical notes, prepare reports, and transcribe physician-patient interactions.2 The widespread incorporation of AI in healthcare is expected to reduce diagnostic error, improve diagnostic efficiency, and lower healthcare costs.6,7 However, the use of AI in healthcare has prompted numerous ethical dilemmas, including a lack of transparency and unpredictability in AI algorithms, bias, accountability, and data protection and privacy.1
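To make the notion of “training” concrete, the short sketch below fits a simple classifier on synthetic labelled data and then scores it on held-out examples. The data, the choice of model, and the use of scikit-learn are illustrative assumptions only, not details taken from the cited sources.

```python
# Minimal illustration of ML "training": a model learns patterns from
# labelled training data, then predicts labels for unseen cases.
# All data here are synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "patients": each row is a feature vector, each label (0/1) a diagnosis.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                      # "training" on the training data
print("held-out accuracy:", model.score(X_test, y_test))
```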

A lack of transparency and unpredictability is an ethical issue pertaining to ML-based AI algorithms. Many ML algorithms are based on complex neural networks that are not fully understood even by experts in the field. These models are often referred to as “black boxes” because their internal workings remain opaque. Caltech professor Yaser Abu-Mostafa has stated, “we don’t know why the neural networks are working as well as they are. If you look at the math, the data that the neural network is exposed to, from which it learns, is insufficient for the level of performance it attains.”8 It is impossible to predict how a neural network will react to a given data set.3 Using artificial neural networks, AI can discover patterns that we have not recognised and cannot explain.7 This presents a unique problem in healthcare. Consider, for instance, a situation where an AI model diagnoses a patient with cancer based on a PET scan, but the physician does not know how the model arrived at that conclusion. The physician would be unable to explain the diagnosis to their patient and, further, unable to provide effective treatment. That said, even if the algorithm’s decision is not understood by physicians, it can prompt further testing and improve diagnoses. Given the importance of understanding AI-generated output, interpretability methods are being developed. Concept learning models aim to make AI predictions more understandable by imposing an interpretable intermediate step: an image detection algorithm, for example, could identify the components contributing to a diagnosis in addition to providing the diagnosis itself. Similarly, convolutional neural networks (CNNs) analyse images layer by layer, and in radiology the output produced by a CNN could be examined by breaking the image down into those layers.9
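The following sketch illustrates the intermediate-step idea in a concept-bottleneck style: one model predicts human-interpretable concepts from the input, and a second model predicts the diagnosis from those concepts. The feature data, concept names, and two-stage design are hypothetical assumptions for illustration, not the specific method of the cited work.

```python
# Concept-bottleneck-style sketch (hypothetical data and names): instead of
# mapping inputs directly to a diagnosis, predict interpretable concepts first
# (e.g. "lesion present", "irregular border"), then map concepts to a diagnosis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                        # stand-in for image features
concepts = (X[:, :3] > 0).astype(int)                 # 3 synthetic binary concepts
diagnosis = (concepts.sum(axis=1) >= 2).astype(int)   # label derived from concepts

concept_model = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, concepts)
diagnosis_model = LogisticRegression(max_iter=1000).fit(concepts, diagnosis)

# At prediction time the intermediate concepts can be shown to the clinician
# alongside the final prediction, making the output easier to interrogate.
pred_concepts = concept_model.predict(X[:5])
pred_diagnosis = diagnosis_model.predict(pred_concepts)
print(pred_concepts, pred_diagnosis)
```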

In a different context, there is a lack of transparency regarding the functioning of AI algorithms because they are proprietary information belonging to companies. Third-party researchers cannot assess the validity of an AI algorithm’s output without access to its training data and predictive methodology. This is of concern because racial and gender biases have been observed in existing AI algorithms.10 Confidentiality in healthcare adds a further layer of complexity, as restricted access to sensitive information makes AI algorithms even harder to assess.

AI functions solely on computer code and logic, so it is counterintuitive to think that it can be biased. In reality, AI algorithms amplify existing biases in society because the data sets (e.g., narrative notes, diagnoses, research data) used to train AI models can themselves be biased.7 For example, a commonly used risk prediction tool in the United States was found to be racially biased. The tool was used by healthcare teams to identify patients for “high-risk care management” programs. When the algorithm selected patients, African American patients had to be considerably more ill than Caucasian patients to be chosen. Remedying this bias would increase the percentage of African American patients receiving help from 17.7% to 46.5%.10 One method to address this problem would be to improve the quality of data sets through data cleaning techniques.7
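One simple way to surface the kind of disparity described above is to audit an algorithm’s outputs across patient groups. The sketch below is a minimal, hypothetical check of this sort; the column names, groups, and data are invented for illustration and are not taken from the study by Obermeyer et al.

```python
# Hypothetical bias audit: compare how ill patients in each group must be
# before the algorithm flags them for high-risk care management.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "n_chronic_conditions": [2, 5, 7, 4, 6, 8],   # proxy for how ill a patient is
    "flagged": [1, 1, 1, 0, 1, 1],                # algorithm's high-risk flag
})

# Average illness among flagged patients, per group: a large gap between groups
# is one warning sign of the kind of bias reported in risk prediction tools.
audit = df[df["flagged"] == 1].groupby("group")["n_chronic_conditions"].mean()
print(audit)
```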

As of January 2024, legislation in England and Wales assumes that computers do not make mistakes, which makes it difficult, legally speaking, to challenge any computer output.11 Yet AI does make mistakes. AlexNet, a convolutional neural network architecture introduced in 2012 and later applied to medical imaging, has been reported to detect cancer with an accuracy of 89%, exceeding that of human pathologists (~70%).12 Nonetheless, no AI algorithm is perfectly accurate. So who assumes accountability for a misdiagnosis: the physician, the hospital, the software developers, or even the AI algorithm itself? Additionally, an AI algorithm may match or outperform a pathologist under research conditions, but this does not guarantee that the same model will perform as well under different clinical conditions.3 Reliability therefore becomes a problem.

Data protection and privacy issues also arise when AI is used in healthcare. Algorithms require large data sets for training purposes, and access to healthcare data such as PET or X-ray scans and diagnoses poses risks for data protection.3 Using AI models means that clinical data must be collected and stored, and this stored data could be hacked and used maliciously.1

At present, the NHS utilises AI in its daily functioning to aid physicians and nurses. This has already raised bioethical issues and increases the scope for such issues in the future. Although AI models are in use, healthcare workers often do not understand them and are therefore unable to explain how a decision was made by the AI.7 Additionally, some patients would rather receive their diagnosis from a physician than from a machine, owing to personal beliefs and perspectives.2 In the future, the continued integration of AI in healthcare could make physicians complacent and reliant on these systems, and the increased use of AI diagnostic systems and surgical robots could threaten job opportunities for pathologists and surgeons, respectively.

There is a trade-off in integrating AI into the healthcare industry. On one hand, AI systems can process information quickly and mitigate physician burnout;7 on the other, this comes at the cost of the bioethical problems outlined above. As technology becomes increasingly prevalent in every aspect of our lives, AI in healthcare seems inevitable. As AI is integrated into healthcare systems, these ethical issues should therefore be addressed, and healthcare institutions and governmental bodies should make procedural and legal changes to limit the potentially harmful impacts of AI.

References

1. Farhud DD, Zokaei S. Ethical issues of artificial intelligence in medicine and healthcare. Iran J Public Health. 2021;50(11):i–v.

2. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94–8.

3. Stahl BC. Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies. SpringerBriefs in Research and Innovation Governance. Cham: Springer; 2021.

4. Yousef M, Allmer J, editors. miRNomics: MicroRNA Biology and Computational Analysis. Methods in Molecular Biology, vol. 1107. Humana Press; 2014.

5. Chowdhary KR. Fundamentals of Artificial Intelligence. New Delhi: Springer India; 2020.

6. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230–43.

7. Byrne MD. Reducing bias in healthcare artificial intelligence. J Perianesth Nurs. 2021;36(3):313–6.

8. Caltech Science Exchange. Can we trust artificial intelligence? [Internet].

9. Saleem H, Shahid AR, Raza B. Visual interpretability in 3D brain tumor segmentation network. Comput Biol Med. 2021;133.

10. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–53.

11. Nature. 2024. doi:10.1038/d41586-024-00168-8.

12. Sarvamangala DR, Kulkarni RV. Convolutional neural networks in medical image understanding: a survey. Evol Intell. 2022;15(1):1–22.