OPINION PAPER

AI and ethics

Susan Leigh Anderson¹ · Michael Anderson²

Received: 10 August 2020 / Accepted: 17 August 2020
© Springer Nature Switzerland AG 2020

Abstract
Since it is of critical importance that autonomous systems, whether software or hardware, that interact with human beings (and perhaps other sentient beings as well) behave in an ethical manner, we consider six possible approaches to effecting this. We argue that the first five approaches are unsatisfactory and defend the last approach, the approach we have taken. It involves discovering ethically relevant features and corresponding prima facie duties present in the various possible actions such a system could take in particular domains, and discovering decision principles for when there is a conflict between those duties. We further maintain that there are a number of additional benefits to taking this approach: it involves becoming clearer about human ethics, in addition to the ethics to which autonomous systems should adhere, and it might well provide inspiration for humans to behave more ethically.

Keywords: AI · Ethics · Machine ethics · Autonomous systems

* Susan Leigh Anderson [email protected]
  Michael Anderson [email protected]

¹ Department of Philosophy, University of Connecticut, Storrs, CT, USA
² Department of Computer Science, University of Hartford, Hartford, CT, USA

1 Introduction

There are many necessary activities that we would like to be able to turn over entirely to autonomously functioning machines, because the jobs that need to be done are too dangerous or unpleasant for humans to perform, because there is a shortage of humans to perform them, or because machines could do a better job than humans. We must ensure, however, that these machines carry out their tasks in an ethical manner. For many, ethical issues are thought to arise only in "life or death" situations. We believe that this is incorrect. Whenever the actions of an autonomous system, software or hardware, that interacts with humans (and perhaps other sentient beings as well) could adversely or positively affect them, it is a matter of ethical concern. Since this is the case with each action it takes (even, for example, when an eldercare robot decides to recharge its batteries, because it is not doing something else at that moment that might be ethically preferable), all of its actions should be ethically evaluated.

Ethics is concerned with determining which action or policy would be the best one given a particular set of circumstances, not just with preventing an undesirable outcome. Therefore, using the primary rule in biomedical ethics ("first, do no harm"), which some have argued for, is not ideal. Consider the example of self-driving cars. Since there are bound to be some accidents, causing harm, does that mean that they should not be developed and put into practice? We ought to compare the number of deaths and injuries there are now, with human drivers, with what would likely happen with only self-driving cars