Artificial intelligence, transparency, and public decision-making



OPEN FORUM

Artificial intelligence, transparency, and public decision-making
Why explanations are key when trying to produce perceived legitimacy

Karl de Fine Licht1 · Jenny de Fine Licht2

Received: 11 December 2019 / Accepted: 28 February 2020
© The Author(s) 2020

1 Technology Management and Economics, Chalmers Tekniska Högskola AB, 412 96 Göteborg, Sweden
2 School of Public Administration, University of Gothenburg, PO Box 712, 405 30 Göteborg, Sweden

Abstract
The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively made decisions to fears for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency in how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and proposes a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient grounds for perceived legitimacy without producing the harms full transparency would bring.

Keywords Artificial intelligence · Transparency · Public decision-making · Perceived legitimacy · Explainability · Framework

1 Introduction

Artificial intelligence (AI) is becoming more prevalent in every aspect of our lives. In particular, the increasing use of AI technologies and assistants for decision-making in public affairs—in taking policy decisions or authoritative decisions regarding the rights or burdens of individual citizens—has sparked a lively debate on the benefits and potential harms of self-learning technologies. This debate ranges from hopes of fully informed and objectively made decisions to fears for the destruction of mankind (e.g., Pasquale 2015; O’Neil 2016; Bostrom 2017).1 To prevent negative outcomes and create accountable systems that individuals can trust, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent (e.g., O’Neil 2016; Wachter et al. 2017; Floridi et al. 2018). This “opening up” will make it easier for us to understand (interpret) the functioning of the AI, as well as make it possible to receive explanations for individual decisions.