ORIGINAL PAPER
Can we program or train robots to be good?

Amanda Sharkey1
© The Author(s) 2017. This article is an open access publication
Abstract  As robots are deployed in a widening range of situations, it is necessary to develop a clearer position about whether or not they can be trusted to make good moral decisions. In this paper, we take a realistic look at recent attempts to program and to train robots to develop some form of moral competence. Examples of implemented robot behaviours that have been described as ‘ethical’, or ‘minimally ethical’, are considered, although they are found to operate only in quite constrained and limited application domains. There is a general recognition that current robots cannot be described as full moral agents, but it is less clear whether this will always be the case. Concerns are raised about the insufficiently justified use of terms such as ‘moral’ and ‘ethical’ to describe the behaviours of robots that are often more related to safety considerations than to moral ones. Given the current state of the art, two possible responses are identified. The first involves continued efforts to develop robots that are capable of ethical behaviour. The second is to argue against, and to attempt to avoid, placing robots in situations that demand moral competence and an understanding of the surrounding social situation. There is something to be gained from both responses, but it is argued here that the second is the more responsible choice.

Keywords  Ethics · Moral competence · Robot · Decision-making · Minimally ethical
* Amanda Sharkey
  [email protected]

1  Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK
Introduction

Our increasing deployment of and reliance on robots means that there is a pressing need for a clear position on the possibility of developing robots that can be described as ‘good’ or ‘ethical’. High-profile concerns have been raised about the potential impact of artificially intelligent systems on humans, and arguments have been made about the need to constrain the behaviour of such systems (e.g. Bostrom 2014; Russell 2016). Two areas in which there is a growing awareness of the extent to which robotics can directly impinge on the health and safety of humans are those involving (i) autonomous vehicles and (ii) robotic weapons, especially ‘autonomous’ robot weapons. It is apparent that autonomous cars are likely to encounter situations in which it is necessary to make life or death decisions about whether to protect themselves or other humans (Lin 2013, 2015). And autonomous robotic weapons could be deployed in situations in which they make decisions about when to use lethal force, and who to kill (Sharkey 2012; Asaro 2012; Altmann et al. 2013). The stakes in such domains are high, and the issues important. Both self-driving cars and lethal autonomous weapons would directly affect the physical safety of human beings. But life or death decisions are not the only ways in which robots could affect human lives: their potential effe