If robots are people, can they be made for profit? Commercial implications of robot personhood



ORIGINAL RESEARCH

Bartlomiej Chomanski¹

Received: 11 September 2020 / Accepted: 28 October 2020
© Springer Nature Switzerland AG 2020

Abstract

It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any system of commercial production of such artificial persons would have to meet. It then shows that it is possible for these requirements to be met, and that doing so will make the commercial production of artificial persons permissible. Lastly, it briefly presents one potential blueprint for what such a framework could look like, inspired by the real-world model of compensating the training of athletes, and then addresses some objections to the view.

Keywords: AI personhood · Robot rights · Robot ethics · Moral status of artificial systems

1 Introduction: commercializing machine personhood?

This paper will consider an underexplored aspect of the ethics of artificial intelligence and robot rights. It will look at whether there is an ethically acceptable institutional framework that permits the commercial production of artificially intelligent agents who deserve the same moral considerability as typical human beings (I will use the term "person" to refer to such entities, and the term "artificial persons," or APs for short (AP singular), to refer to artificially intelligent agents who meet the criteria for moral considerability owed to persons). This seems to be an area whose normative dimensions are important to consider—if these artificial agents are ever built, it is possible that they will be produced commercially; or, at least, that there will be a temptation to do so (one may, of course, think that APs will be produced by scientific institutions or regular people; however, the question of when it would be permissible to build them even in such contexts persists). Questions about social policies concerning these agents will need to be answered in a satisfactory way to, first, avoid grave moral harms to entities with human-equivalent moral status, and, second, to find the best ways of harnessing the invention to promote social welfare. The main aim of this paper is to defend the idea that a market for the production of APs is likely to conform to a normative framework consistent with our basic intuitions regarding the treatment of all persons (artificial or not).1 A note of caution should

* Bartlomiej Chomanski, [email protected]
¹ Rotman Institute of Philosophy, Western University, 1151 Richmond Street North, London, ON, Canada