Embedding Values in Artificial Intelligence (AI) Systems

Ibo van de Poel¹

¹ Department of Values, Technology, & Innovation, School of Technology, Policy & Management, Technical University Delft, Delft, The Netherlands

Received: 13 April 2020 / Accepted: 18 August 2020
© The Author(s) 2020

Abstract
Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody certain values. This account understands embodied values as the result of design activities intended to embed those values in such systems. AI systems are here understood as a special kind of sociotechnical system that, like traditional sociotechnical systems, is composed of technical artifacts, human agents, and institutions but that, in addition, contains artificial agents and certain technical norms that regulate interactions between artificial agents and other elements of the system. The specific challenges and opportunities of embedding values in AI systems are discussed, and some lessons for better embedding values in AI systems are drawn.

Keywords Artificial intelligence · Values · Ethics · Sociotechnical system · Value embedding · Institution · Artificial agent · Norms · Multi-agent system

1 Introduction

Nowadays, a lot of attention is being given to ethical issues, and more broadly to values, in the design and deployment of artificial intelligence (AI). Recently, the EU High-Level Expert Group on AI (2019: 12) formulated four ethical principles that AI applications should meet: respect for human autonomy, prevention of harm, fairness, and explicability. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019: 4) also recently formulated a number of high-level principles,
including human rights, well-being, data agency, transparency, and accountability. These values, as well as relevant others such as security and sustainability, are supposed to guide the governance and design of new AI technologies. But how can we verify, or at least assess, whether AI systems indeed embody these values?

The question of whether and how technologies embody values is not new. It has been discussed in the philosophy of technology, where several accounts have been developed (e.g., Winner 1980; Floridi and Sanders 2004; Flanagan et al. 2008; Klenk 2020; for an overview of several accounts, see Kroes and Verbeek 2014). Some authors deny that technologies are, or can be, value-laden (e.g., Pitt 2014; for a criticism, see Miller 2020), while others see technologies as imbued with values due to the way they have been designed.