

SHORT REPORT

Implementation of model explainability for a basic brain tumor detection using convolutional neural networks on MRI slices

Paul Windisch 1,2 & Pascal Weber 1 & Christoph Fürweger 1,3 & Felix Ehret 1 & Markus Kufeld 1 & Daniel Zwahlen 2 & Alexander Muacevic 1

Received: 22 April 2020 / Accepted: 22 May 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract

Purpose: While neural networks gain popularity in medical research, attempts to make the decisions of a model explainable are often only made towards the end of the development process, once a high predictive accuracy has been achieved.

Methods: In order to assess the advantages of implementing features to increase explainability early in the development process, we trained a neural network to differentiate between MRI slices containing either a vestibular schwannoma, a glioblastoma, or no tumor.

Results: Making the decisions of the network more explainable helped to identify potential bias and choose appropriate training data.

Conclusion: Model explainability should be considered in the early stages of training a neural network for medical purposes, as it may save time in the long run and will ultimately help physicians integrate the network's predictions into a clinical decision.

Keywords: Deep learning · Explainability · Machine learning · Artificial intelligence · Glioblastoma · Vestibular schwannoma
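The abstract does not state how the network's decisions were made explainable, so the following is only an illustrative sketch: a toy three-class CNN (vestibular schwannoma / glioblastoma / no tumor) paired with a Grad-CAM-style heatmap, one common way to visualize which image regions drive a prediction. The layer sizes, the 256×256 single-channel input, the layer name "last_conv", and the choice of Grad-CAM itself are assumptions for illustration, not the authors' published method.

```python
# Hypothetical sketch only: architecture, input shape, layer names and the
# Grad-CAM approach are illustrative assumptions, not the paper's method.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models


def build_classifier(input_shape=(256, 256, 1), n_classes=3):
    """Toy CNN distinguishing schwannoma / glioblastoma / no tumor."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same", name="last_conv")(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)


def grad_cam(model, image, class_index, conv_layer_name="last_conv"):
    """Return a coarse heatmap of the regions that drove the chosen class."""
    grad_model = models.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)       # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))       # average gradients per channel
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]                           # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy() # normalize to [0, 1]


if __name__ == "__main__":
    model = build_classifier()
    slice_ = np.random.rand(256, 256, 1).astype("float32")  # stand-in for an MRI slice
    probs = model.predict(slice_[np.newaxis, ...], verbose=0)[0]
    heatmap = grad_cam(model, slice_, class_index=int(np.argmax(probs)))
    print("class probabilities:", probs, "heatmap shape:", heatmap.shape)
```

Overlaying such a heatmap on the input slice makes it visible when a classifier relies on regions outside the lesion, which is the kind of bias the authors report being able to catch early by building explainability into the development process.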

Introduction

The brain in general and brain tumors in particular have been of interest to the artificial intelligence community almost since neural networks gained traction due to increased computational power and more advanced algorithms, roughly starting in 2010 [1]. Since then, several studies have been published, initially focusing on detecting intracranial diseases and later covering other, more advanced tasks like segmentation, response assessment, and outcome prediction, while achieving diagnostic accuracies that have been able to compete with and in some cases even surpass trained physicians [2–4].

* Paul Windisch, [email protected]

1 European CyberKnife Center, Munich, Germany
2 Department of Radiation Oncology, Kantonsspital Winterthur, Winterthur, Switzerland
3 Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne, Germany

However, some aspects of neural networks have drawn criticism, especially from clinicians, and contribute to the considerable delay between the publication of an important research paper and its translation into clinical practice. Even if researchers achieve impressive metrics for their model, and even if those metrics can be reproduced and remain stable when the model is deployed at other institutions, which is not always the case, the reasons why a neural network makes a particular decision are often unknown. This remains a problem and negatively affects the willingness of physicians to rely on the model's decisions when deciding on the management of a patient [5]. Since this topic gained coverage in the scientific literature