We need to talk about deception in social robotics!

ORIGINAL PAPER

Amanda Sharkey · Noel Sharkey
Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK
Accepted: 29 October 2020 © The Author(s) 2020

Abstract
Although some authors claim that deception requires intention, we argue that there can be deception in social robotics, whether or not it is intended. By focusing on the deceived rather than the deceiver, we propose that false beliefs can be created in the absence of intention. Supporting evidence is found in both human and animal examples. Instead of assuming that deception is wrong only when carried out to benefit the deceiver, we propose that deception in social robotics is wrong when it leads to harmful impacts on individuals and society. The appearance and behaviour of a robot can lead to an overestimation of its functionality, or to an illusion of sentience or cognition, that can promote misplaced trust and inappropriate uses such as care and companionship of the vulnerable. We consider the allocation of responsibility for harmful deception. Finally, we suggest that harmful impacts could be prevented by legislation, and by the development of an assessment framework for sensitive robot applications.

Keywords Robot · Deception · Intentional deception · Harm · Robotics · False belief · Prevention · Social robotics · Illusion

"Most of the evil in this world is done by people with good intentions." ― T.S. Eliot

Introduction

According to a number of authors (e.g. Matthias 2015; Sparrow and Sparrow 2006; Sparrow 2002; Wallach and Allen 2009; Sharkey and Sharkey 2011), the development and creation of social robots often involves deception. By contrast, some have expressed doubts about the prevalence of deception in robotics (e.g. Collins 2017; Sorell and Draper 2017). It seems that there is disagreement in the field about what counts as deception, and whether or when it should be avoided. The 4th principle of the U.K. Engineering and Physical Sciences Research Council's (EPSRC) 'principles of robotics' (Boden et al. 2017) states that 'Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be made transparent'. Although this principle is a step in the right direction, there is a need for a more detailed consideration of what constitutes deception in social robotics, when it is wrong, who should be held responsible, and whether it can be prevented or avoided.

A social robot is a physically embodied robot that is able to interact socially with people. Wallach and Allen (2009) hold that any techniques enabling robots to detect basic human social gestures and to respond with human-like social cues "are arguably forms of deception" (p. 44). Matthias (2015) suggests that a robot that appears to have mental or emotional capabilities that it does not really have is implicated "in a kind of deception" (p. 17). Grodzinsky et al. (2015) declare tha