Operations of power in autonomous weapon systems: ethical conditions and socio-political prospects



ORIGINAL ARTICLE

Nik Hynek¹ · Anzhelika Solovyeva¹

Received: 3 May 2020 / Accepted: 28 July 2020
© The Author(s) 2020

Abstract

The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology is increasingly associated with an approaching dramatic change in the nature of warfare. What becomes particularly important and ever more intensely contested is how it becomes embedded with, and concurrently impacts, two social structures: ethics and law. While no global regime banning this technology has yet emerged, regulatory attempts at establishing a ban have intensified, along with acts of resistance and blocking coalitions. This article reflects on the prospects and limitations, as well as the ethical and legal intensity, of the emerging regulatory framework. To allow for such an investigation, a power-analytical approach to studying international security regimes is utilized.

Keywords: Artificial intelligence · Autonomous weapon systems · Campaign to Stop Killer Robots · Ethics · International security regimes · Power analysis

* Anzhelika Solovyeva
[email protected]

Nik Hynek
[email protected]

¹ Charles University Research Centre of Excellence, Department of Security Studies, Faculty of Social Sciences, Charles University, U Krize 8, 15800 Praha 5, Jinonice, Prague, Czech Republic

1 Introduction

This article inquires into a highly topical and hotly debated contemporary security issue: autonomous weapon systems (AWS), alternatively known as weaponized artificial intelligence (AI) or, increasingly, as lethal AI. In particular, the focus is on the dynamics and prospects of global regulation, or rather proscription, of this emerging technology. At the core of this analysis are two social structures with which AWS become embedded and which they concurrently impact: ethics and law.

Currently in development, AWS, aka Killer Robots, can be differentiated from all other weapon categories by a unique combination of attributes. First, they are fully autonomous (Kastan 2013, p. 49). This presupposes their ability to engage in autonomous (lethal) decision-making (Asaro 2012, p. 690), autonomous (lethal) targeting (Sharkey 2012, p. 787) and autonomous (lethal) force (Sharkey 2010, p. 370). While their autonomy still contains a considerable theoretical aspect, it may include the ability to operate without human control or supervision in dynamic, unstructured and open environments (Altmann and Sauer 2017, p. 118). Second, they can be used as offensive autonomous weapons (FLI 2015). Last but not least, it is advances in AI that have paved the way for, and that characterize, fully autonomous (lethal) weapon systems (O'Connell 2014, p. 526; Walsh 2015, p. 2).

The Campaign to Stop Killer Robots features 'the latest in a series of transnational advocacy campaigns in the area of human