Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles



Veljko Dubljević¹

© Springer Nature B.V. 2020

Abstract

Autonomous vehicles (AVs)—and the accidents they are involved in—attest to the urgent need to consider the ethics of artificial intelligence (AI). The question dominating the discussion so far has been whether we want AVs to behave in a 'selfish' or utilitarian manner. Rather than modeling self-driving cars on a single moral system such as utilitarianism, one possible way to approach programming for AI would be to reflect recent work in neuroethics. The agent–deed–consequence (ADC) model (Dubljević and Racine in AJOB Neurosci 5(4):3–20, 2014a; Behav Brain Sci 37(5):487–488, 2014b) provides a promising descriptive and normative account while also lending itself well to implementation in AI. The ADC model explains moral judgments by breaking them down into positive or negative intuitive evaluations of the agent, the deed, and the consequence in any given situation. These intuitive evaluations combine to produce a positive or negative judgment of moral acceptability. For example, the overall judgment of moral acceptability in a situation in which someone committed a deed that is judged as negative (e.g., breaking a law) would be mitigated if the agent had good intentions and the action had a good consequence. This explains the considerable flexibility and stability of human moral judgment that has yet to be replicated in AI. This paper examines the advantages and disadvantages of implementing the ADC model and how the model could inform future work on the ethics of AI in general.

Keywords: Agent–deed–consequence (ADC) model · Autonomous vehicles (AVs) · Artificial intelligence (AI) · Artificial neural networks · Artificial morality · Neuroethics
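To make the combinatorial structure of the model concrete, the sketch below shows one hypothetical way the three intuitive valences could be combined into an overall acceptability score. It is not from the paper: the names, the valence scale, and the weighted-sum rule are illustrative assumptions only, since the ADC model itself does not commit to any particular arithmetic.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    """Illustrative container for the three ADC components.

    Each valence is a signed intuitive evaluation in [-1.0, 1.0]:
    negative values encode a negative intuition (e.g., bad intent,
    law-breaking, harmful outcome), positive values the reverse.
    """
    agent_valence: float        # evaluation of the agent's character/intentions
    deed_valence: float         # evaluation of the action itself
    consequence_valence: float  # evaluation of the outcome

def adc_judgment(s: Situation,
                 weights: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> float:
    """Combine the three valences into an overall moral-acceptability score.

    A weighted sum is only one hypothetical combination rule, but it does
    capture the mitigation effect described in the abstract: a negative
    deed can be offset by good intentions and a good consequence.
    """
    wa, wd, wc = weights
    return (wa * s.agent_valence
            + wd * s.deed_valence
            + wc * s.consequence_valence)

# Example from the abstract: a negative deed (breaking a law) committed
# with good intentions and a good outcome yields a mitigated, mildly
# positive overall judgment.
speeding_to_hospital = Situation(agent_valence=0.8,
                                 deed_valence=-0.6,
                                 consequence_valence=0.7)
print(adc_judgment(speeding_to_hospital))  # 0.9 > 0 -> judged acceptable
```

In an actual AI implementation, the weights (and the combination rule itself) need not be hand-set: an artificial neural network could, in principle, be trained to approximate them from human moral-judgment data; the linear rule above is only the simplest placeholder.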

* Veljko Dubljević

¹ Science Technology and Society Program, Department of Philosophy and Religious Studies, NC State University, 453 Withers Hall, 101 Lampe Dr, Raleigh, NC 27607, USA


Introduction

Autonomous vehicles (AVs) (Deng 2015)—and the accidents they are involved in—attest to the urgent need to consider the ethics of artificial intelligence (AI). The problems that need to be addressed regarding their role in society include the extent to which they will be used and the ethical algorithms they will be programmed to follow (Wallach 2008). The most notable ethical question discussed so far has been whether we want them to behave in a 'selfish' manner (i.e., protect the owner and their property at all costs) or a utilitarian one (i.e., reduce the number of lives lost at all costs). Indeed, Shariff and colleagues argue that "[p]eople are torn between how they want autonomous vehicles to ethically behave; they morally believe the vehicles should operate under utilitarian principles, but prefer to buy vehicles that prioritize their own lives as passengers" (Shariff et al. 2017). Thus, society is faced with an ethical quandary, which could be resolved in several ways (Waldrop 2015). One is co