Responsible AI and moral responsibility: a common appreciation



OPINION PAPER

Daniel W. Tigard 1

Received: 19 August 2020 / Accepted: 2 September 2020
© The Author(s) 2020

Abstract

Responsibility is among the most widespread buzzwords in the ethics of artificial intelligence (AI) and robotics. Yet, the term often remains unsubstantiated when employed in these important technological domains. Indeed, notions like ‘responsible AI’ and ‘responsible robotics’ may sound appealing, for they seem to convey a sense of moral goodness or ethical approval, thereby inciting psychological connections to self-regulation, social acceptance, or political correctness. For AI and ethics to come together in truly harmonious ways, we will need to work toward establishing a common appreciation. In this commentary, I break down three varieties of the term and invoke insights from the analytic ethics literature as a means of offering a robust understanding of moral responsibility in emerging technology. While I do not wish to accuse any parties of incorrect usage, my hope is that together researchers in AI and ethics can be better positioned to appreciate and to develop notions of responsibility for technological domains.

Keywords: Responsible AI · Responsible robotics · Technology ethics · AI ethics · Moral responsibility

1 Introduction

‘Responsible AI’, ‘responsible robotics’, ‘responsible research and innovation’, ‘responsible technology’: these notions have garnered widespread attention in recent years, within and beyond academic settings [3, 9, 16, 17, 19]. To a large extent, the growing popularity of these buzzwords is understandable. A great deal of uncertainty, and perhaps anxiety, has arisen in discussions of emerging technologies, particularly surrounding AI and robotics, along with efforts to quell the fears. Yet, the idea of responsibility is often unsubstantiated in these discussions [6], and indeed, it appears to be employed as a placeholder for notions like moral goodness or ethical approval, thereby inciting psychological connections to self-regulation, social acceptance, or political correctness. Being responsible is certainly much more than being morally good, and responsibility may well be ascribed to things which are far from being ethically approvable. To be sure, an individual might be appropriately considered responsible for committing moral atrocities. Accordingly, for AI research and ethics to come together in truly harmonious ways, researchers across disciplines will need to work toward establishing a common appreciation of this key concept. In this commentary, I break down three varieties of responsibility and invoke insights from the analytic ethics literature as a means of offering a robust understanding of moral responsibility for applications to technology. With this agenda, my goal is not to accuse anyone of incorrect usage. Rather, I aim to help

* Daniel W. Tigard, [email protected]

1 Institute for History and Ethics of Medicine, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany