Autonomous weapons systems and the moral equality of combatants
ORIGINAL PAPER
Autonomous weapons systems and the moral equality of combatants

Michael Skerker1 · Duncan Purves2 · Ryan Jenkins3

1 US Naval Academy, Annapolis, USA
2 University of Florida, Gainesville, USA
3 Cal Poly State University, San Luis Obispo, USA

* Ryan Jenkins [email protected]
Michael Skerker [email protected]
Duncan Purves [email protected]
© Springer Nature B.V. 2020
Abstract

To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. We suggest that at least one major driver of the intuitive moral aversion to lethal AWS is that their use disrespects their human targets by violating the martial contract between human combatants. On our understanding of this doctrine, service personnel cede a right not to be directly targeted with lethal violence only to other human agents. Artificial agents, of which AWS are one example, cannot understand the value of human life. A human combatant cannot transfer his privilege of targeting enemy combatants to a robot. Therefore, the human duty-holder who deploys AWS breaches the martial contract between human combatants and disrespects the targeted combatants. We consider whether this novel deontological objection to AWS forms the foundation of several other popular yet imperfect deontological objections to AWS.

Keywords Lethal autonomous weapons · Moral equality of combatants · Just war theory · Military ethics

Scholars have voiced objections to the development and deployment of fully autonomous weapons systems (AWS), largely for reasons having to do with technical limitations or political ramifications. For example, some have raised concerns that such systems might make political leaders more cavalier about engaging in military action;1 that non-state actors could readily get hold of small and relatively cheap AWS;2 or that such systems would be vulnerable to hacking. Still other scholars raise principled objections to the use of AWS, suggesting that such systems should not be deployed even if the political and technical concerns could be addressed or shown to be no different in kind from concerns about widely accepted long-distance weaponry.
Some have worried that AWS would not be able to adhere to the principles of Distinction and Proportionality and would thereby violate the Laws of Armed Conflict or jus in bello restrictions;3 that there would be no accountability for war crimes perpetrated by robots;4 and that the use of AWS demonstrates a "profound disrespect" for the enemy.5 To many, the idea of robots autonomously deciding to kill human beings is grotesque, and yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant.