Mutual Explanations for Cooperative Decision Making in Medicine



PROJECT REPORT

Ute Schmid1 · Bettina Finzel1

Received: 20 October 2019 / Accepted: 2 January 2020
© The Author(s) 2020

The work presented in this paper is part of the BMBF ML-3 project Transparent Medical Expert Companion (TraMeExCo), FKZ 01IS18056 B, 2018–2021.

* Ute Schmid, ute.schmid@uni-bamberg.de
  Bettina Finzel, bettina.finzel@uni-bamberg.de
1 Cognitive Systems, University of Bamberg, Bamberg, Germany

Abstract
Exploiting mutual explanations for interactive learning is presented as part of an interdisciplinary research project on transparent machine learning for medical decision support. The focus of the project is to combine deep learning black-box approaches with interpretable machine learning for the classification of different types of medical images, thereby uniting the predictive accuracy of deep learning with the transparency and comprehensibility of interpretable models. Specifically, we present an extension of the Inductive Logic Programming system Aleph that allows for interactive learning. Medical experts can ask for verbal explanations. They can correct classification decisions and, in addition, can also correct the explanations. Thereby, expert knowledge can be taken into account in the form of constraints for model adaptation.

Keywords: Human-AI partnership · Inductive Logic Programming · Explanations as constraints
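To make the interaction pattern concrete, the following is a minimal, hypothetical Python sketch. It is not the authors' Aleph extension, which operates on Prolog clauses induced by Aleph; it only mirrors the loop described above: the system explains a classification by the premises of the rule it applied, and the expert can either correct the class label (a new training example) or reject part of the explanation (a constraint that subsequent induction must respect). All class, method, and feature names are invented for illustration.

```python
# Illustrative sketch only -- a toy rule learner, not Aleph.
from dataclasses import dataclass, field

@dataclass
class RuleModel:
    # a rule is a (frozenset of premises, predicted label) pair
    rules: list = field(default_factory=list)
    forbidden_premises: set = field(default_factory=set)  # expert constraints
    examples: list = field(default_factory=list)          # (features, label)

    def induce(self):
        """Toy induction: one rule per labelled example, built only from
        premises the expert has not ruled out (constraint-guided search)."""
        self.rules = []
        for features, label in self.examples:
            premises = frozenset(features) - self.forbidden_premises
            if premises:
                self.rules.append((premises, label))

    def classify(self, features):
        """Return (label, explanation); the explanation is the premise set
        of the first rule that covers the example."""
        for premises, label in self.rules:
            if premises <= set(features):
                return label, premises
        return None, frozenset()

    def correct_label(self, features, true_label):
        # expert overrides the class decision -> new training example
        self.examples.append((set(features), true_label))
        self.induce()

    def correct_explanation(self, bad_premise):
        # expert rejects part of the explanation -> constraint for adaptation
        self.forbidden_premises.add(bad_premise)
        self.induce()

# Usage: learn from one case, let the expert reject a premise, re-induce.
model = RuleModel()
model.correct_label({"irregular_margin", "artifact_shadow"}, "malignant")
print(model.classify({"irregular_margin", "artifact_shadow"}))
model.correct_explanation("artifact_shadow")   # "this feature is not diagnostic"
print(model.classify({"irregular_margin", "artifact_shadow"}))
```

After the expert's correction, the re-induced rule no longer mentions the rejected premise, which is the sense in which expert feedback acts as a constraint on model adaptation in this sketch.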

1 Introduction

Medical decision making is one of the most relevant real-world domains where intelligent support is necessary to help human experts master the ever-growing complexity. Since medicine is a highly sensitive domain where errors can have fatal consequences, transparency and comprehensibility are legal as well as ethical requirements [24]. Therefore, the use of standard machine learning approaches, such as (deep) neural networks, is not advisable because the learned models are black boxes [1]. That is, the user only has access to the input information (for instance, a medical image) and the resulting classifier decision as output. The reasoning underlying this decision remains opaque. Another challenge when applying machine learning in medicine and in many other real-world domains is that the amount and quality of data often cannot meet the demands of highly data-intensive machine learning approaches: classes are often strongly imbalanced, and for many specific manifestations of clinical diagnoses data are sparse. Apart from routine diagnoses, in many cases there is no ground truth available. Diagnostic gold standard tests often have limitations in reliability as well as validity. The ultima ratio to overcome this data engineering bottleneck is to involve humans who have the expertise to evaluate the quality of data as well as the validity of the output of learned models. In consequence, incremental and interactive approaches are promising options for making use of machine learning in medical diagnostics [13]. Starting with an initial model, new cases can be incorporated as they occur in practice, and system decisions based on e