

OPEN FORUM

An emerging AI mainstream: deepening our comparisons of AI frameworks through rhetorical analysis

Epifanio Torres¹ · Will Penman¹

Received: 30 July 2020 / Accepted: 27 August 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020

Abstract
Comparing frameworks for AI development allows us to see trends and reflect on how we are conceptualizing, interacting with, and imagining futures for AI. Recent scholarship comparing a range of AI frameworks has often focused methodologically on consensus, which has led to problems in evaluating potentially ambiguous values. We contribute to this scholarship using a rhetorical perspective attuned to how frameworks shape people’s actions. This perspective allows us to develop the concept of an “AI mainstream” through an analysis of five of the highest-profile frameworks, including Asimov’s Three Laws. We identify four features of this emerging AI mainstream shared by most or all of the frameworks: human-centered design focus, abstraction-oriented ethical reasoning, privileged authorship, and ahistorical regulatory justifications. Notably, each of these features permeates each framework, rather than being limited to a single principle. We then evaluate these shared features and offer scholarly alternatives to complement and improve them.

Keywords: Human-centered · Abstraction · Authorship · History

1 Introduction

With the ongoing development of artificial intelligence (AI) technologies, new frameworks for regulating AI are being issued at an increasing rate. By one count, 36 frameworks were issued in 2018 alone, making 84 frameworks total since 2011 (Jobin et al. 2019). As a result, scholarship that does some initial sensemaking is helpful. Based on consensus among six high-profile frameworks, Floridi and Cowls (2019) identify five “core principles” (n.p.). Similarly, Jobin et al. (2019) use a consensus-driven analysis to identify eleven “overarching” ethical values/principles, with five principles referenced in more than half of the frameworks. And for Hagendorff (2020), it is consensus in 22 recent frameworks that helps uncover AI goals to critique. In the process of identifying consensus among frameworks, these scholars have also found it appropriate to make various critiques.

* Epifanio Torres, [email protected] · Will Penman, [email protected]

¹ Department of Computer Science, Princeton University, 35 Olden St, Princeton, NJ 08540, USA

Hagendorff puts it most strongly by asserting, “AI ethics is failing in many cases” (p. 113). Yet with such a strong methodological focus on consensus, existing scholarship has little way to differentiate between positive and negative convergence. Jobin et al. (2019), for instance, critique several areas in which the frameworks converge, including that many of them come from a similar geographic area and that many of them do not emphasize sustainability. But surprisingly, these critiques (and other critiques based on convergence) are relegated to a discussion section and actually represented in the abstract as divergence.