Making moral machines: why we need artificial moral agents

OPEN FORUM

Paul Formosa¹ · Malcolm Ryan²

¹ Department of Philosophy, Macquarie University, Sydney, NSW 2109, Australia
² Department of Computing, Macquarie University, Sydney, NSW 2109, Australia

Received: 19 June 2020 / Accepted: 8 October 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020

Abstract

As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis of the relevant arguments for and against creating AMAs, and we argue that, all things considered, we have strong reasons to continue to responsibly develop AMAs. The key contributions of this paper are threefold. First, to provide the first comprehensive response to the important arguments made against AMAs by Wynsberghe and Robbins (in "Critiquing the Reasons for Making Artificial Moral Agents", Science and Engineering Ethics 25, 2019) and to introduce several novel lines of argument in the process. Second, to collate and thematise for the first time the key arguments for and against AMAs in a single paper. Third, to recast the debate away from blanket arguments for or against AMAs in general and towards a more nuanced discussion of what sort of AMAs it is morally appropriate to use, in what sort of contexts, and for what sort of purposes.

Keywords: Artificial moral agents (AMA) · Artificial intelligence (AI) · Moral machines · Robot ethics · Machine ethics · Autonomous vehicle ethics

1 Introduction

As robots and Artificial Intelligences (AIs) become more enmeshed in rich social contexts, it seems inevitable that they will become moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should pursue it. In a recent paper, Wynsberghe and Robbins (2019, p. 732) claim that "the motivations for developing moral machines do not withstand closer inspection" and thus that "machine ethicists need to provide better reasons". We respond to this important challenge and attempt to provide those better reasons here, as follows. First, we clarify what is meant by a moral machine or Artificial Moral Agent (AMA). We then look at nine reasons against creating AMAs that are found in the relevant literature, and we respond to each concern. Having weakened the negative case against AMAs, we then outline the positive case by examining seven reasons in favour of AMAs that Wynsberghe and Robbins try to reject, and we develop counter-responses to each of their concerns. We conclude by claiming that the overall case for responsibly developing AMAs is strong.