Can We Agree on What Robots Should be Allowed to Do? An Exercise in Rule Selection for Ethical Care Robots



Dieter Vanderelst1 · Jurgen Willems2

Accepted: 21 November 2019
© The Author(s) 2019

Abstract

Future Care Robots (CRs) should be able to balance a patient's often conflicting rights without ongoing supervision. Many of the trade-offs faced by such a robot will require a degree of moral judgment. Some progress has been made on methods to guarantee that robots comply with a predefined set of ethical rules. In contrast, methods for selecting these rules are lacking. Approaches that depart from existing philosophical frameworks often do not result in implementable robotic control rules. Machine learning approaches are sensitive to biases in the training data and suffer from opacity. Here, we propose an alternative, empirical, survey-based approach to rule selection. We suggest this approach has several advantages, including transparency and legitimacy. The major challenge for this approach, however, is that a workable solution, or social compromise, has to be found: it must be possible to obtain a consistent and agreed-upon set of rules to govern robotic behavior. In this article, we present an exercise in rule selection for a hypothetical CR to assess the feasibility of our approach. We assume the role of robot developers using a survey to evaluate which robot behavior potential users deem appropriate in a practically relevant setting, i.e., patient non-compliance. We evaluate whether it is possible to find such behaviors through a consensus. Assessing a set of potential robot behaviors, we surveyed the acceptability of robot actions that potentially violate a patient's autonomy or privacy. Our data support the empirical approach as a promising and cost-effective way to query ethical intuitions, allowing us to select behavior for the hypothetical CR.

Keywords Ethical robots · Assistive robots · Ethical dilemma · Care-robot

Electronic supplementary material The online version of this article (https://doi.org/10.1007/s12369-019-00612-0) contains supplementary material, which is available to authorized users.

Jurgen Willems (corresponding author) [email protected]
Dieter Vanderelst [email protected]

1 Department of Psychology, University of Cincinnati, Cincinnati, Ohio, USA
2 Institute for Public Management and Governance, Vienna University of Economics and Business, Vienna, Austria

1 Introduction

Care Robots (CRs) have been proposed as a means of relieving the disproportionate demand that the growing population of elderly people places on health services (e.g., [13,29,31,58]). In the future, CRs might work alongside professional health workers in both hospitals and care homes. However, the most desirable scenario is for CRs to help improve care delivery at home and reduce the burden on informal caregivers. In this way, CRs will not only aid in dealing with the unsustainable increase in health care expenses. By allowing patients to live longer at home, CRs could also increase patient autonomy and self-management [10], and possibly impro