A decision-theoretic approach for model interpretability in Bayesian framework



Homayun Afrabandpey1 · Tomi Peltola1 · Juho Piironen3 · Aki Vehtari1 · Samuel Kaski1,2

Received: 10 January 2020 / Revised: 3 July 2020 / Accepted: 11 August 2020
© The Author(s) 2020

Abstract
A salient approach to interpretable machine learning is to restrict modeling to simple models. In the Bayesian framework, this can be pursued by restricting the model structure and prior to favor interpretable models. Fundamentally, however, interpretability is about users' preferences, not the data-generating mechanism; it is therefore more natural to formulate interpretability as a utility function. In this work, we propose an interpretability utility that explicates the trade-off between explanation fidelity and interpretability in the Bayesian framework. The method consists of two steps. First, a reference model, possibly a black-box Bayesian predictive model that does not compromise accuracy, is fitted to the training data. Second, a proxy model from an interpretable model family that best mimics the predictive behaviour of the reference model is found by optimizing the interpretability utility function. The approach is model agnostic: neither the interpretable model nor the reference model is restricted to a certain class of models, and the optimization problem can be solved using standard tools. Through experiments on real-world data sets, using decision trees as interpretable models and Bayesian additive regression models as reference models, we show that for the same level of interpretability, our approach generates more accurate models than the alternative of restricting the prior. We also propose a systematic way to measure the stability of interpretable models constructed by different interpretability approaches and show that our proposed approach generates more stable models.

Keywords: Interpretable machine learning · Bayesian predictive models
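To make the two-step structure concrete, the sketch below fits a flexible reference model and then a shallow decision tree that mimics its predictions. It is a minimal illustration under stated assumptions only: a scikit-learn random forest stands in for the Bayesian reference model, the tree depth stands in for the complexity side of the interpretability utility, and the paper's actual utility optimization is not reproduced.

```python
# Minimal sketch of the two-step idea (assumptions: random forest as a
# stand-in for the Bayesian reference model, decision-tree depth as a
# crude interpretability knob; not the paper's actual utility).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: fit a flexible "reference" model to the training data.
reference = RandomForestRegressor(n_estimators=200, random_state=0)
reference.fit(X_train, y_train)

# Step 2: fit an interpretable proxy to mimic the reference model's
# predictions rather than the raw labels.
proxy = DecisionTreeRegressor(max_depth=3, random_state=0)
proxy.fit(X_train, reference.predict(X_train))

print("reference test R^2:", reference.score(X_test, y_test))
print("proxy test R^2:    ", proxy.score(X_test, y_test))
```

Fitting the proxy to the reference model's predictions, rather than directly to the data, is what distinguishes this approach from simply training a small tree; the depth limit plays the role of the interpretability constraint traded off against fidelity.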

1 Introduction and background

Accurate machine learning (ML) models are usually complex and opaque, even to the modelers who built them (Lipton 2018). This lack of interpretability remains a key barrier to the adoption of ML models in some application domains, including health care and economics.



To bridge this gap, there is growing interest in the ML community in interpretability methods. Such methods can be divided into (1) interpretable model construction and (2) post-hoc interpretation. The former aims at constructing models that are understandable by design. Post-hoc interpretation approaches can be categorized further into (1) model-level interpretation (a.k.a. global interpretation) and (2) prediction-level interpretation (a.k.a. local interpretation) (Du et al. 2018). Model-level interpretation aims at making existing black-box models interpretable. Prediction-level interpretation explains the predictions a black-box model makes for individual inputs.