In AI We Trust: Ethics, Artificial Intelligence, and Reliability

Mark Ryan¹

Received: 16 December 2019 / Accepted: 26 May 2020
© The Author(s) 2020

¹ The Division of Philosophy, KTH Royal Institute of Technology, Stockholm, Sweden

Abstract

One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission's High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG, AI Ethics Guidelines for Trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it does not possess emotive states and cannot be held responsible for its actions, which are requirements of the affective and normative accounts of trust. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all but is instead a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as this undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility from those developing and using them.

Keywords: Artificial intelligence ethics · Trustworthy AI · European Commission High-level Expert Group · Philosophy of trust · Reliability

Introduction

One of the main difficulties with analysing the ethical impact of artificial intelligence (AI) is overcoming the tendency to anthropomorphise it. The media is enthralled by images of machines that can do what we can do, and often far better. We are bombarded with novels, movies, and television shows depicting sentient robots, so it is not surprising that we associate, categorise, and define these machines in human terms. While people attribute human activities and abilities to machines, it becomes problematic when this anthropomorphisation is attached to human moral activities, such as trust. Organisations such as the European Commission's High-level Expert Group on AI (HLEG) have adopted the position that AI is something that we can, and should, trust (HLEG 2019, p. 35). However, this requires that 'all actors and processes [including the AI technology itself] that are part of the system's socio-technical context throughout its entire life cycle [emphasis added]' are trustworthy (HLEG 2019, p. 5). The HLEG state that while trustworthiness is not typically a property ascribed to machines, they want to ascribe it to AI. They propose that there are three main characteristics of trusting AI:

• The AI technology itself;
• Designers and organisations behind the development, deployment and use of AI