Autonomous reboot: Aristotle, autonomy and the ends of machine ethics



ORIGINAL ARTICLE

Jeffrey White1,2

Received: 6 January 2020 / Accepted: 6 July 2020
© The Author(s) 2020

Abstract
Tonkens (Mind Mach, 19, 3, 421–438, 2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe—"rational" and "free"—while also satisfying the perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly, and not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach. Beavers pushes for the reinvention of traditional ethics to avoid "ethical nihilism" due to the reduction of morality to mechanical causation. Wallach pushes for redoubled efforts toward a comprehensive account of ethics to guide machine ethicists on the issue of artificial moral agency. Two options thus present themselves: reinterpret traditional ethics in a way that affords a comprehensive account of moral agency inclusive of both artificial and natural agents, or give up on that possibility and "muddle through" regardless. This series of papers pursues the first option, meets Tonkens' "challenge" and pursues Wallach's ends through Beavers' proposed means, by "landscaping" traditional moral theory in resolution of a comprehensive account of moral agency. This first paper sets out the challenge, establishes the tradition that Kant inherited from Aristotle, briefly entertains an Aristotelian AMA, fields objections, and ends with unanswered questions. The next paper in the series responds to the challenge in Kantian terms, arguing that a Kantian AMA is not only a possibility for machine ethics research but a necessary one.

Keywords: Autonomy · Artificial moral agent · AMA · Machine ethics

* Jeffrey White
  [email protected]

1 Department of Philosophy, The University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
2 Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology Graduate University, 1919-1 Tancha, Onna-son, Okinawa 904-0495, Japan

1 Introduction

Only the descent into the hell of self-cognition can pave the way to godliness. – Immanuel Kant.1

Understanding subjective human morality has been a focus of traditional ethics since the Greeks. To engineer this condition into artificial agents is one aim of research into artificial agency, and it may also be the best way to understand human morality and moral theory at the same time, with successes in these efforts anticipated in related fields, for example in advancing work in computational modeling of social agency and of psychologically realistic sociopolitical structures in effective practical policy making (cf. Naveh and Sun 2006; Sun 2013, 2020; White 2016, 2020; Han et al. 2019, 2020; Pereira and Saptawijaya 2015; Pereira 2019). Yet, it is unclear how to engineer moral autonomy into artificial agents, not to mention what to do if we do