HRI ethics and type-token ambiguity: what kind of robotic identity is most responsible?

Thomas Arnold · Matthias Scheutz

Human-Robot Interaction Laboratory, Department of Computer Science, Tufts University, 200 Boston Avenue, Medford, MA 02155, USA

© Springer Nature B.V. 2018

Abstract

This paper addresses ethical challenges posed by a robot acting as both a general type of system and a discrete, particular machine. Using the philosophical distinction between “type” and “token,” we locate type-token ambiguity within a larger field of indefinite robotic identity, which can include networked systems or multiple bodies under a single control system. The paper explores three specific areas where the type-token tension might affect human–robot interaction: how a robot demonstrates the highly personalized recounting of information, how a robot makes moral appeals and justifies its decisions, and how the possible need for replacement of a particular robot shapes its ongoing role (including how its programming could transfer to a new body platform). We also consider how a robot might regard itself as a replaceable token of a general robotic type and take extraordinary actions on that basis. For human–robot interaction, robotic type-token identity is not an ontological problem with a single solution, but a range of possible interactions that responsible design must take into account, given how people stand to gain and lose from the shifting identities social robots will present.

Keywords: Robot ethics · Artificial moral agents · Human–robot interaction · Robotic design

Introduction

In his famous consideration of the mechanical reproduction of art, Walter Benjamin remarks, “Even the most perfect reproduction of a work of art is lacking in one element: its presence in time and space, its unique existence at the place where it happens to be” (Benjamin et al. 1970). Few would view a social robot primarily as a work of art, but Benjamin’s simple observation touches upon a key ethical point for the study of human–robot interaction and robotic design. Robots are not special for being manufactured or “mechanically reproduced,” of course. Unlike the Mona Lisa or Starry Night, there need be no one original, authentic robot from which all others derive (though the Apple museum, for example, testifies to how the human thirst for original relics even applies to technological devices).


On the contrary, the very promise of robots to perform reliably, predictably, and effectively seems to hinge on each one of a certain kind being made to the same specifications. What makes Benjamin’s remark apposite, instead, is the crucial role of presence. Social robots do not hang on a wall but function as mobile, interactive systems, sharing time and space with particular users and interactants. It is clear that interactions with a robot can elicit great emotional investment from people, from military funerals to household gifts