Getting into the engine room: a blueprint to investigate the shadowy steps of AI ethics
OPEN FORUM
Johan Rochel (1) · Florian Evéquoz (2)
Received: 30 December 2019 / Accepted: 31 August 2020
© The Author(s) 2020
Abstract

Enacting an AI system typically requires three iterative phases where AI engineers are in command: selection and preparation of the data, selection and configuration of algorithmic tools, and fine-tuning of the different parameters on the basis of intermediate results. Our main hypothesis is that these phases involve practices with ethical questions. This paper maps these ethical questions and proposes a way to address them in light of a neo-republican understanding of freedom, defined as absence of domination. We thereby identify different types of responsibility held by AI engineers and link them to concrete suggestions on how to improve professional practices. This paper contributes to the literature on AI and ethics by focusing on the work necessary to configure AI systems, thereby offering input both for better professional practices and for societal debates.

Keywords Applied ethics · AI ethics · Data ethics · Data science · Responsible innovation
1 Introduction

The ethics of AI has given rise to an important body of literature covering a wide range of issues (Müller 2020). Within this body of literature, this paper focuses on the role played by individuals in the design, development and concrete use of AI systems. More specifically, we want to identify and conceptualize the ethical questions entailed by the apparently technical work necessary to configure AI systems for a specific task. We are convinced that the technical language in which this work is wrapped should not obscure the important decisions made by individuals. The stakes are high: it is not only about the responsibility of AI engineers in their professional activities, but also about the public good impacted by their choices. In this paper, we focus on AI systems that rely on machine learning algorithms, including deep neural network systems (Schmidhuber 2016).

Enacting an AI system typically requires three iterative phases where human developers are in command. We call them "AI engineers" to underline the fact that they are practitioners programming and configuring computational operations. We map the relevant ethical questions along the main stages of the "Cross-Industry Standard Process for Data Mining" (see below). First, AI engineers prepare the data which will be used to achieve the objectives prescribed by the project leader. Second, AI engineers select and prepare the proper algorithmic tools used to analyse the data. Third, fine-tuning of the system is carried out to improve the intermediate results and to present these results to the project leader in a useful way. Our main claim is that these phases involve practices with ethical questions.

1 Faculty of Law, University of Zürich, Zürich, Switzerland
2 HES-SO University of Applied Sciences, Sierre, Switzerland
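To make the three phases concrete, the following is a minimal illustrative sketch (not from the paper) of the workflow the introduction describes, using scikit-learn as a stand-in toolchain and synthetic data; the dataset, model choice, and parameter grid are all assumptions for illustration only.

```python
# Illustrative sketch of the three iterative phases in which AI engineers
# are "in command"; every concrete choice below (data, model, grid) is an
# assumption, not the authors' own setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Phase 1: selection and preparation of the data
# (here: synthetic data, a train/test split, and feature scaling).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Phase 2: selection and configuration of the algorithmic tools
# (here: a scaler followed by a random-forest classifier).
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(random_state=0)),
])

# Phase 3: fine-tuning of the parameters on the basis of
# intermediate (cross-validated) results.
param_grid = {
    "model__n_estimators": [50, 100],
    "model__max_depth": [3, None],
}
search = GridSearchCV(pipeline, param_grid, cv=3)
search.fit(X_train, y_train)

print(search.best_params_)
print(search.score(X_test, y_test))
```

Each step embeds discretionary decisions by the engineer (which data to keep, which model family to try, which parameter grid to search), which is exactly the decision space the paper's ethical analysis targets.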