Redefining diagnostics: the integration of machine learning in medical imaging

By Federico Boiardi

Medical imaging plays a vital role in modern healthcare, enabling practitioners to non-invasively access anatomical information for accurate disease diagnosis and management. The integration of machine learning (ML) techniques into medical imaging now has the potential to revolutionise diagnostics by improving accuracy, efficiency, and the personalisation of care. Numerous ML models have shown promising results in enhancing diagnostic accuracy across various imaging modalities,1,2,3 with some receiving FDA approval in recent years.4 With the rapid advancement of computational technology and artificial intelligence, ML will inevitably become a ubiquitous diagnostic tool, assisting physicians and possibly replacing human judgement altogether in some areas.

Figure 1. Methodological approach for making a brain tumour classifier

ML involves the development of algorithms that can learn to make predictions based on relationships within data. In medical imaging, ML algorithms can be trained to recognise complex patterns in images, such as the presence of tumours or other pathological features. Such algorithms typically employ supervised learning, wherein a labelled dataset is used to train the ML model, enabling it to discern correlations between input data and desired output labels.5 To illustrate this, consider the following high-level protocol, implemented in Python, for designing a basic supervised classifier that detects brain tumours in transverse MRI scans (see Figure 1). It proceeds as follows:

Figure 2. Sample of MR images from the dataset

Step 1: Dataset. A labelled dataset comprising nearly 1500 brain MR images was acquired from Kaggle (see Figure 2). That is, each scan in the dataset is annotated with a corresponding label, indicating the presence or absence of a tumour. The dataset was divided into training and test sets using stratified random sampling, with an 80% training and 20% test split. Naturally, the training set is used to train the model, while the test set evaluates the model’s performance on previously unseen MR images by comparing its predictions to the true target values.
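The split described above can be sketched with scikit-learn. The dataset here is a random stand-in for the real MR images (the Kaggle data itself is not reproduced), so only the splitting logic is meaningful:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the ~1500 labelled scans: one flattened
# feature row per image, with y = 1 (tumour) or 0 (no tumour).
rng = np.random.default_rng(0)
X = rng.random((1500, 64))
y = rng.integers(0, 2, size=1500)

# Stratified 80/20 split: the tumour/no-tumour ratio is preserved
# in both the training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
print(X_train.shape, X_test.shape)  # (1200, 64) (300, 64)
```

Stratification matters here because a plain random split could, by chance, leave one set with disproportionately few tumour scans, skewing both training and evaluation.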

Figure 3. Augmented MRI scan

Step 2: Data augmentation. Rotations within a range of ±10° were applied to each image in the training set to simulate natural variations that might occur between MRI scans, as seen in Figure 3. This increases the number of samples in the training set, which can help improve the classifier’s performance.
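A minimal sketch of this augmentation step, using SciPy's rotation routine on a synthetic stand-in image. The ±10° range follows the text; the image and the number of extra samples are illustrative:

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)
# Hypothetical grayscale scan with values in [0, 1]; a real pipeline
# would load the actual MRI data here.
scan = rng.random((256, 256))

def augment(image, rng, max_deg=10.0):
    """Rotate by a random angle in [-max_deg, +max_deg], keeping the original size."""
    angle = rng.uniform(-max_deg, max_deg)
    # reshape=False keeps the 256x256 shape; cval=0 fills exposed corners
    # with background black.
    return rotate(image, angle, reshape=False, order=1, cval=0.0)

# Generate a few rotated variants per scan to enlarge the training set.
augmented = [augment(scan, rng) for _ in range(3)]
print(len(augmented), augmented[0].shape)
```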

Figure 4. Automated skull cropping

Step 3: Image processing. All scans in both sets were processed to ensure consistency and enhance training performance. This involved cropping the images to remove irrelevant background (see Figure 4), resizing them to 256×256 pixels, and normalising the pixel intensity range.
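The processing step might look roughly like the following, assuming scikit-image for resizing. The bounding-box crop is a simple stand-in for the automated skull cropping shown in Figure 4, and the synthetic "scan" is only there to make the sketch runnable:

```python
import numpy as np
from skimage.transform import resize

def preprocess(scan, out_size=(256, 256)):
    """Crop to the brain's bounding box, resize, and normalise pixels to [0, 1]."""
    # Treat near-zero pixels as background; crop to the tight bounding
    # box of everything else.
    mask = scan > scan.max() * 0.05
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    cropped = scan[r0:r1 + 1, c0:c1 + 1]
    resized = resize(cropped, out_size, anti_aliasing=True)
    # Min-max normalisation to the [0, 1] range.
    return (resized - resized.min()) / (resized.max() - resized.min())

# Hypothetical scan: a bright disc ("head") on a black background.
yy, xx = np.mgrid[:300, :300]
scan = ((yy - 150) ** 2 + (xx - 150) ** 2 < 80 ** 2).astype(float) * 200.0
out = preprocess(scan)
print(out.shape, out.min(), out.max())
```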

Figure 5. MRI scan before and after HOG

Step 4: Feature extraction. Features were extracted from the processed images using the Histogram of Oriented Gradients (HOG) technique. Crucially, this step transforms the raw pixel data into a more compact, informative representation that ML algorithms can effectively use, as illustrated in Figure 5.
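With scikit-image, the HOG step can be sketched as follows. The cell and block sizes here are illustrative choices, not necessarily those of the original pipeline, and the input is again a synthetic stand-in for a preprocessed scan:

```python
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(0)
scan = rng.random((256, 256))  # hypothetical preprocessed 256x256 scan

# HOG summarises local gradient orientations into a compact feature vector:
# the image is divided into cells, gradient directions are binned per cell,
# and neighbouring cells are pooled into normalised blocks.
features = hog(
    scan,
    orientations=9,            # gradient direction bins per cell
    pixels_per_cell=(16, 16),  # local cell size
    cells_per_block=(2, 2),    # cells pooled into one normalised block
)
print(features.shape)
```

The result is a one-dimensional vector far smaller than the 65,536 raw pixels, which is what makes it tractable input for a classical classifier like an SVM.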

Step 5: Training. Extracted features from the training set were then used to train a support vector machine – a widely used classification algorithm in healthcare.6
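The training step might look like this with scikit-learn's SVC. The feature matrix is a synthetic, deliberately well-separated stand-in for the real HOG features:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical HOG feature matrix: 200 samples of 50 features each,
# with class-1 rows shifted so the two classes are separable.
X_train = rng.normal(size=(200, 50))
y_train = np.repeat([0, 1], 100)
X_train[y_train == 1] += 2.0

# A support vector machine finds the decision boundary maximising the
# margin between the two classes (here with an RBF kernel).
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
```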

Step 6: Evaluation. Finally, the trained model was evaluated on the test set using precision and recall, achieving 92% and 95%, respectively. In other words, 92% of the scans the model flagged as containing a tumour truly did, and the model correctly identified 95% of all tumours in the test set. Further improvements could be made depending on the model’s application. For instance, recall may be prioritised in this scenario to reduce the proportion of false-negative diagnoses.
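Precision and recall can be computed with scikit-learn. The labels below are a small made-up example, not the study's actual predictions:

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical true labels and model predictions for 10 test scans
# (1 = tumour present, 0 = absent).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 0, 1]

# Precision: of the scans flagged as tumours, how many truly were.
# Recall: of the true tumours, how many the model caught.
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
print(precision, recall)  # 0.8 0.8 (4 of 5 flags correct; 4 of 5 tumours found)
```

The one missed tumour (index 3) lowers recall, while the one false alarm (index 6) lowers precision – the trade-off the text describes when choosing which metric to optimise.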

ML applications extend beyond classification, with segmentation being another critical area of focus in medical imaging.7 Segmentation involves identifying distinct structures within an image, which can be particularly valuable in delineating tumour boundaries or identifying specific tissue types. This versatility of ML highlights its immense potential to improve patient care across diverse medical domains.

The integration of ML in medical imaging holds several implications for radiology practice and patient care. These include improved diagnostic accuracy, with ML algorithms reducing errors by identifying subtle or complex patterns that may be overlooked even by a trained eye. Computational approaches are also unaffected by the shortcomings of human judgement that can affect patient outcomes (e.g., fatigue). Likewise, ML algorithms could increase diagnostic efficiency. The automated analysis of medical images conserves time and resources, alleviating the workload of radiologists and enabling them to focus on more challenging cases. This is particularly relevant given that the average radiologist must interpret one CT or MR image every 3–4 seconds to meet workload demands.8

While the incorporation of ML in medical imaging offers numerous benefits, it also raises several ethical and practical concerns. For example, the datasets used to develop accurate ML models can compromise patient privacy. The use of large patient datasets for training is always a delicate matter, as merely removing metadata (e.g., patient names) is often insufficient to preserve privacy.9 Algorithmic biases are similarly worrisome: ML models may inadvertently perpetuate or exacerbate existing biases in healthcare, especially when training samples are not representative of diverse populations.

Further complicating the matter, many modern ML models – particularly deep learning algorithms – can be enigmatic. Often described as “black boxes”, these models make it difficult to understand how they arrive at their conclusions.10 This can make trusting and integrating them challenging, especially in a clinical setting.

Several key areas of future research and development aim to improve ML. Deep learning approaches such as convolutional neural networks (CNNs) are one such focus. These are quickly becoming a prevalent force in medical image analysis,7 eliminating the need for manual feature engineering: CNNs learn discriminative image features autonomously during training. Methods to broaden ML applications to a wider array of clinical scenarios are also being researched, including image-guided interventions and therapy planning.

In general, more varied yet relevant data enhances the accuracy of ML models. Multimodal data integration promises to deliver precisely this. By combining information from different imaging modalities (e.g., CT, MRI, and PET) alongside patient data (e.g., genomics and clinical records), novel multimodal extensions provide a more comprehensive understanding of a patient’s condition, ultimately improving diagnostic accuracy.

The integration of ML in medical imaging represents a significant paradigm shift in diagnostics and patient care. However, implementing ML in medical imaging also raises important ethical and practical concerns that must be addressed. As the field continues to evolve, ongoing research and development efforts will be crucial in harnessing the full potential of ML and ultimately transforming how we diagnose and treat disease.

References:

  1. Kooi T, Litjens G, van Ginneken B, Gubern-Mérida A, Sánchez CI, Mann R, et al. Large scale deep learning for computer aided detection of mammographic lesions [Internet]. ScienceDirect. Elsevier; 2016. Available from: https://doi.org/10.1016/j.media.2016.07.007
  2. Gulshan V, Peng L, Coram M, et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs [Internet]. JAMA Network. 2016. Available from: https://doi.org/10.1001/jama.2016.17216
  3. Lakhani P, Sundaram B. Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using Convolutional Neural Networks [Internet]. Radiology. 2017. Available from: https://pubs.rsna.org/doi/10.1148/radiol.2017162326
  4. Benjamens S, Dhunnoo P, Meskó B. The state of Artificial Intelligence-based FDA-approved Medical Devices and algorithms: An online database [Internet]. Nature. Nature Publishing Group; 2020. Available from: https://doi.org/10.1038/s41746-020-00324-0
  5. Soni D. Supervised vs. Unsupervised Learning [Internet]. Medium. Towards Data Science; 2020. Available from: https://towardsdatascience.com/supervised-vs-unsupervised-learning-14f68e32ea8d
  6. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial Intelligence in healthcare: Past, present and future [Internet]. Stroke and Vascular Neurology. BMJ Specialist Journals; 2017. Available from: https://doi.org/10.1136/svn-2017-000101
  7. Litjens G, Kooi T, Ehteshami Bejnordi B, Arindra Adiyoso Setio A, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis [Internet]. ScienceDirect. Elsevier; 2017. Available from: https://doi.org/10.1016/j.media.2017.07.005
  8. McDonald RJ, Schwartz KM, Eckel LJ, et al. The Effects of Changes in Utilization and Technological Advancements of Cross-Sectional Imaging on Radiologist Workload [Internet]. Academic Radiology. 2015. Available from: https://doi.org/10.1016/j.acra.2015.05.007
  9. Rocher L, Hendrickx JM, de Montjoye Y-A. Estimating the success of re-identifications in incomplete datasets using generative models [Internet]. Nature. Nature Publishing Group; 2019. Available from: https://doi.org/10.1038/s41467-019-10933-3
  10. Castelvecchi D. Can we open the black box of AI? [Internet]. Nature News. Nature Publishing Group; 2016. Available from: https://www.nature.com/news/can-we-open-the-black-box-of-ai-1.20731