Transparency as design publicity: explaining and justifying inscrutable algorithms



ORIGINAL PAPER

Michele Loi1 · Andrea Ferrario2 · Eleonora Viganò1

© The Author(s) 2020

Abstract

In this paper we argue that the transparency of machine learning algorithms, just like explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable, focusing our discussion on counterfactual explanations. These approaches to explanation simplify the real nature of the black boxes and risk misleading the public about the normative features of a model. We propose a new form of algorithmic transparency that consists in explaining an algorithm as an intentional product that serves a particular goal, or multiple goals (Daniel Dennett's design stance), in a given domain of applicability, and in providing a measure of the extent to which such goals are achieved, together with evidence about how that measure has been reached. We call this idea of algorithmic transparency "design publicity." We argue that design publicity can be more easily linked with the justification of the use and of the design of the algorithm, and of each individual decision following from it. In comparison to post-hoc explanations of individual algorithmic decisions, design publicity meets a different demand of the explainee: the demand for impersonal justification. Finally, we argue that when models that pursue justifiable goals (which may include fairness as avoidance of bias towards specific groups) to a justifiable degree are used consistently, the resulting decisions are all justified even if some of them are (unavoidably) based on incorrect predictions. For this argument, we rely on John Rawls's idea of procedural justice applied to algorithms conceived as institutions.

Keywords Machine learning · Transparency · Explanations · Justifications · Philosophy of science

* Michele Loi
  [email protected]

1 Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland

2 ETH Zurich, Zurich, Switzerland

Introduction

In this paper, we provide a new theory of algorithmic transparency, with a focus on both explanations and justifications. We consider as "algorithms" those human artifacts stemming from the training of machine learning models on digital data in order to generate predictions that assist or automate decision-making. These algorithms are subject to intense scrutiny for both technical and moral reasons, as their application in products and services is constantly increasing, as is their potential to affect everyone's lives. Examples come from credit scoring, digital financial coaching and job assistants, automated insurance claim processing bots, smart home services, online dating platforms, autonomous driving solutions and poli
