Editable machine learning models? A rule-based framework for user studies of explainability
Stanislav Vojíř¹ · Tomáš Kliegr¹

Received: 15 July 2019 / Revised: 17 August 2020 / Accepted: 19 August 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract So far, most user studies dealing with the comprehensibility of machine learning models have used questionnaires or surveys to acquire input from participants. In this article, we argue that compared to questionnaires, the use of an adapted version of a real machine learning interface can yield a new level of insight into what attributes make a machine learning model interpretable, and why. We further argue that interpretability research needs to consider the task of humans editing the model, not least due to existing or forthcoming legal requirements on the right of human intervention. In this article, we focus on rule models, as these are directly interpretable as well as editable. We introduce an extension of the EasyMiner system for generating classification and explorative models based on association rules. The presented web-based rule editing software allows the user to perform common editing actions such as modifying a rule (adding or removing an attribute), deleting a rule, creating a new rule, or reordering rules. To observe the effect of a particular edit on predictive performance, the user can validate the rule list against a selected dataset using a scoring procedure. The system is equipped with functionality that facilitates its integration with crowdsourcing platforms commonly used to recruit participants.

Keywords Rule learning · User experiment · Crowdsourcing · Explainable Artificial Intelligence · Cognitive Computing · Legal compliance

Mathematics Subject Classification 68T30
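To make the editing and validation workflow concrete, the following minimal Python sketch mirrors the semantics of a rule list that supports the editing actions listed above, together with an accuracy-based scoring procedure. All names (Rule, RuleList, modify_rule, score) are illustrative inventions, not the EasyMiner API; first-match rule semantics, a default class, and plain accuracy as the validation score are likewise assumptions made only for this sketch.

```python
# Illustrative sketch only; names and semantics are assumptions, not the EasyMiner API.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Rule:
    """One classification rule: a conjunction of attribute=value conditions -> class."""
    antecedent: dict   # e.g. {"income": "high", "balance": "positive"}
    consequent: str    # predicted class label, e.g. "accept"

    def matches(self, instance: dict) -> bool:
        # The rule fires when every condition agrees with the instance.
        return all(instance.get(a) == v for a, v in self.antecedent.items())


@dataclass
class RuleList:
    rules: list = field(default_factory=list)
    default_class: str = "unknown"   # returned when no rule fires

    # --- editing actions corresponding to those named in the abstract ---
    def create_rule(self, rule: Rule, position: Optional[int] = None) -> None:
        self.rules.insert(len(self.rules) if position is None else position, rule)

    def delete_rule(self, index: int) -> None:
        del self.rules[index]

    def modify_rule(self, index: int, attribute: str, value: Optional[str] = None) -> None:
        # Passing a value adds (or overwrites) a condition; passing None removes it.
        if value is None:
            self.rules[index].antecedent.pop(attribute, None)
        else:
            self.rules[index].antecedent[attribute] = value

    def reorder(self, old_index: int, new_index: int) -> None:
        self.rules.insert(new_index, self.rules.pop(old_index))

    # --- validation against a labelled dataset ---
    def predict(self, instance: dict) -> str:
        # First-match semantics: rules earlier in the list take precedence.
        for rule in self.rules:
            if rule.matches(instance):
                return rule.consequent
        return self.default_class

    def score(self, dataset: list, label_key: str = "class") -> float:
        """Fraction of rows whose predicted class equals the true class."""
        hits = sum(self.predict(row) == row[label_key] for row in dataset)
        return hits / len(dataset)
```

In the presented system these actions are carried out through the web interface; the sketch only captures their effect at the data-structure level.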
Electronic supplementary material The online version of this article (https://doi.org/10.1007/s11634-020-00419-2) contains supplementary material, which is available to authorized users.
Corresponding author: Tomáš Kliegr, [email protected]
Stanislav Vojíř, [email protected]

¹ Department of Information and Knowledge Engineering, Faculty of Informatics and Statistics, University of Economics, Prague, W. Churchill Sq. 4, 130 67 Prague, Czech Republic
1 Introduction

While rule-based models are not currently considered a mainstream topic, owing to their predictive performance being lower than that of neural networks or random forests (Fernández-Delgado et al. 2014), this is changing as the explainability of machine learning models gains in importance. Neural networks and random forests achieve the best predictive performance but are considered uninterpretable; common examples of interpretable models are Bayesian networks and decision trees (Miller 2019). The advantage of the rule-based representation is not only good interpretability; rules are also well-suited for learning and classification over relational data, which they can represent more naturally than most other methods. The use of rules is not limited to classification, as, e.g., association rule learning can also be used to build explorative models.