Interpreting SVM for medical images using Quadtree
Prashant Shukla1 · Abhishek Verma1 · Abhishek1 · Shekhar Verma1 · Manish Kumar1

Prashant Shukla (corresponding author): [email protected]
Abhishek Verma: [email protected]
Abhishek: [email protected]
Shekhar Verma: [email protected]
Manish Kumar: [email protected]

1 Department of IT, Indian Institute of Information Technology Allahabad, Deoghat, Jhalwa, Allahabad, UP, India
Received: 25 July 2019 / Revised: 22 May 2020 / Accepted: 28 July 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract

In this paper, we propose a quadtree based approach to capture the spatial information of medical images for explaining nonlinear SVM predictions. In medical image classification, interpretability becomes important for understanding why the adopted model works. Explaining an SVM prediction is difficult because the implicit mapping performed in kernel classification is uninformative about the position of data points in the feature space and about the nature of the separating hyperplane in the original space. The proposed method finds ROIs that contain the discriminative regions behind the prediction. Localization of the discriminative region in small boxes can help in interpreting the SVM prediction. Quadtree decomposition is applied recursively before applying SVMs on the sub-images, and the model-identified ROIs are highlighted. Pictorial results of experiments on various medical image datasets demonstrate the effectiveness of this approach. We validate the correctness of our method by applying occlusion methods.

Keywords Nonlinear classification · Interpretability · Localization · Quadtree
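To make the procedure concrete, the following is a minimal sketch of quadtree-guided ROI localization, assuming a scikit-learn SVC and a simple fixed-length intensity-histogram descriptor so that blocks of different sizes share one feature space. The function names and the histogram descriptor are illustrative choices for this sketch, not the authors' exact pipeline.

    import numpy as np
    from sklearn.svm import SVC

    def features(patch, bins=32):
        # Fixed-length intensity histogram so patches of any size map into
        # one feature space (an illustrative descriptor, not the paper's own).
        # Assumes pixel intensities normalized to [0, 1].
        hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0), density=True)
        return hist

    def quadtree_rois(img, clf, min_size=16, box=None, rois=None):
        # Recursively split the image into quadrants and keep the smallest
        # quadrants that the trained SVM still labels as positive (diseased).
        if rois is None:
            rois = []
        if box is None:
            box = (0, 0, img.shape[0], img.shape[1])  # (row, col, height, width)
        r, c, h, w = box
        patch = img[r:r + h, c:c + w]
        if clf.predict([features(patch)])[0] != 1:
            return rois  # prune: this block carries no discriminative evidence
        if h <= min_size or w <= min_size:
            rois.append(box)  # smallest positive block becomes an ROI
            return rois
        h2, w2 = h // 2, w // 2
        for dr, hh in ((0, h2), (h2, h - h2)):
            for dc, ww in ((0, w2), (w2, w - w2)):
                quadtree_rois(img, clf, min_size, (r + dr, c + dc, hh, ww), rois)
        return rois

    # Usage (hypothetical data): train on whole-image features, then localize.
    # X = np.stack([features(im) for im in train_images]); y has labels in {0, 1}
    # clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    # rois = quadtree_rois(test_image, clf)  # list of (row, col, h, w) boxes

Pruning negative quadrants early is what keeps the search cheap: only blocks that still carry discriminative evidence are subdivided further, and the surviving smallest boxes localize the regions driving the prediction.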
1 Introduction

Machine learning models are required not only to be optimized for task performance but also to fulfil other auxiliary criteria like interpretability. If a model can explain its prediction
in a way that can be converted into knowledge, giving insight into the domain, the model is considered interpretable [14]. An SVM classifies linearly separable datasets with high accuracy, but when a dataset is not linearly separable, we apply the kernel trick to transform the data into a higher-dimensional space. These kernel SVMs use an implicit mapping, which makes it a challenge to gain an intuitive understanding of the prediction. Though the model classifies the data with high accuracy, the separating hyperplane is unknown. The hyperplane can be used to classify a new instance, but the nature of the hyperplane in the feature space is not known. Why particular instances become support vectors also cannot be explained. Hence, explaining the predictions of a nonlinear SVM is a challenge, and the model behaves like a black box.

The interpretation of results becomes vital in the case of medical image classification. Diagnosing a medical condition without determining the association between the underlying disease and its manifestation is unacceptable. However, we can observe that in medical image classification, the manifestation of the disease is a global function o
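To make the source of this opacity concrete, recall the dual form of the kernel SVM decision function (a standard textbook identity rather than a result of this paper):

$$f(\mathbf{x}) = \operatorname{sign}\left(\sum_{i \in \mathcal{S}} \alpha_i y_i\, K(\mathbf{x}_i, \mathbf{x}) + b\right), \qquad K(\mathbf{x}_i, \mathbf{x}) = \langle \phi(\mathbf{x}_i), \phi(\mathbf{x}) \rangle,$$

where $\mathcal{S}$ indexes the support vectors. For the RBF kernel, $K(\mathbf{x}_i, \mathbf{x}) = \exp(-\gamma \lVert \mathbf{x}_i - \mathbf{x} \rVert^2)$. The feature map $\phi$ is never evaluated; only kernel values are computed, so the hyperplane normal $\mathbf{w} = \sum_{i \in \mathcal{S}} \alpha_i y_i\, \phi(\mathbf{x}_i)$ exists only implicitly, which is precisely why the decision boundary cannot be inspected in the original image space.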