

OPEN FORUM

The rise of artificial intelligence and the crisis of moral passivity

Berman Chan (Department of Philosophy, Purdue University, 100 N University St., West Lafayette, IN 47907, USA; [email protected])

Received: 10 January 2020 / Accepted: 14 February 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020

Abstract
In “The rise of the robots and the crisis of moral patiency”, John Danaher argues that the rise of AI and robots will dramatically suppress our moral agency and encourage the expression of moral passivity. This discussion note argues that Danaher needs to strengthen his argument by supporting two key assumptions: (a) that AI will otherwise be friendly or neutral (instead of simply destroying humans), and (b) that humans will largely succumb to the temptation of over-relying upon AI for motivation and decision-making in their personal lives.

Keywords: Robotics · Artificial intelligence · Technological unemployment · Moral agents · Moral patients · Friendly AI

It is an understatement to say that artificial intelligence is being used more and more in our world. Even though artificial intelligence technology has not reached anywhere near its full potential, several writers look to the distant future and envision all sorts of impacts on humans. John Danaher (2019a) discusses one particular impact upon humanity, defending the thesis that the rise of AI and robots will “suppress our moral agency and increase the expression of moral patiency” (2019a, p. 133). Danaher contends that this dramatic increase in our moral patiency will come as the result of AI’s intrusion into three significant human arenas for moral agency: (1) the workplace and employment, (2) political, legal, and bureaucratic decision-making, and (3) leisure and personal activities. Intrusion into these arenas corresponds to three trends coinciding with the rise of the robots. I will argue, however, that Danaher’s argument based upon the first trend needs more support for the assumption that robots will pursue a certain kind of takeover (i.e. a friendly or neutral one) so as not to provoke human resistance and the exercise of our moral agency. Moreover, I will argue that his argument based on the third trend needs to support the important assumption that most people will succumb to the temptation of overly relying upon AI for personal motivation and decision-making.

Let us briefly discuss some terminology. Danaher distinguishes robots, which are systems capable of acting in the world, from AI, which need not have this capability; the latter is typically confined to a computer that assists or offers instructions to humans (2019a, p. 130). However, Danaher uses the two terms interchangeably (as have I above, granting him the connection), and he often refers to both at once as he argues that the crisis of moral patiency will result from our increased reliance upon both. A moral patient is “a being who possesses some moral status […] but who doe