Artificial superintelligence and its limits: why AlphaZero cannot become a general agent
OPEN FORUM
Karim Jebari1 · Joakim Lundborg2

Received: 4 February 2020 / Accepted: 1 September 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020

1 Institute for Futures Studies, Holländargatan 13, 101 31 Stockholm, Sweden
2 Meniga, Stockholm, Sweden

* Karim Jebari
  [email protected]; [email protected]

  Joakim Lundborg
  [email protected]
Abstract

An intelligent machine surpassing human intelligence across a wide set of skills has been proposed as a possible existential catastrophe (i.e., an event comparable in value to that of human extinction). Among those concerned about existential risk related to artificial intelligence (AI), it is common to assume that AI will not only be very intelligent, but also be a general agent (i.e., an agent capable of action in many different contexts). This article explores the characteristics of machine agency, and what it would mean for a machine to become a general agent. In particular, it does so by articulating some important differences between belief and desire in the context of machine agency. One such difference is that while an agent can by itself acquire new beliefs through learning, desires need to be derived from preexisting desires or acquired with the help of an external influence. Such influence could be a human programmer or natural selection. We argue that to become a general agent, a machine needs productive desires, or desires that can direct behavior across multiple contexts. However, productive desires cannot sui generis be derived from non-productive desires. Thus, even though general agency in AI could in principle be created by human agents, general agency cannot be spontaneously produced by a non-general AI agent through an endogenous process (i.e., self-improvement). In conclusion, we argue that a common AI scenario, where general agency suddenly emerges in a non-general agent AI, such as DeepMind’s superintelligent board game AI AlphaZero, is not plausible.

Keywords: Artificial general intelligence · Superintelligence · Agency · The belief/desire model · Intentional action · Existential risk
1 The intelligence explosion

An intelligent machine surpassing human intelligence has been proposed as a major existential risk, most prominently by Bostrom (2014). Such a machine, it is argued, would not only be very intelligent, but would also be a general agent (i.e., an agent capable of action in many different contexts). This article explores the characteristics of machine agency, and what it would mean for a machine to become a general agent.

A number of scholars, most prominently Turing (1950), have argued that humanity will eventually create an artificial intelligence (AI) with greater intelligence than that of any human. Such a machine would have, it is argued, a superior ability to improve itself and/or to create improved versions of itself. If this happens, the machine could not only improve its ability to solve a wide range of problems, but also its capacity for self-improvement, a recursive process that could culminate in an intelligence explosion.
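The paper's central distinction, set out in the abstract, is between beliefs, which an agent can revise endogenously through learning, and desires, which must either be derived instrumentally from desires the agent already holds or be supplied from outside (by a programmer, or by natural selection in biological agents). The following is a minimal sketch of that asymmetry; it illustrates the authors' conceptual claim and is not code from the paper, and all names in it (Agent, learn, derive_desire, inject_desire) are hypothetical.

```python
# A minimal sketch (not from the paper) of the belief/desire asymmetry the
# authors describe. All class and method names are hypothetical illustrations
# of the conceptual claim, not any published API.

class Agent:
    def __init__(self, beliefs, desires):
        self.beliefs = dict(beliefs)   # propositions the agent takes to be true
        self.desires = list(desires)   # goals capable of motivating action

    def learn(self, proposition, truth_value):
        """Beliefs can be acquired endogenously: observation suffices."""
        self.beliefs[proposition] = truth_value

    def derive_desire(self, means, end):
        """New desires arise only instrumentally, from desires already held:
        desiring `end` and believing `means` leads to it can produce a desire
        for `means`. Beliefs alone never generate a desire."""
        if end in self.desires and self.beliefs.get((means, "leads_to", end)):
            self.desires.append(means)

    def inject_desire(self, desire):
        """The only other source of a desire is external: a human programmer,
        or natural selection in the case of biological agents."""
        self.desires.append(desire)


# A board-game agent desires only to win. It can learn arbitrarily many new
# facts, but learning never yields a desire that ranges over new contexts
# unless that desire is instrumentally derivable or injected from outside.
alpha = Agent(beliefs={}, desires=["win_game"])
alpha.learn(("play_strong_opening", "leads_to", "win_game"), True)
alpha.derive_desire("play_strong_opening", "win_game")  # instrumental: allowed
print(alpha.desires)  # ['win_game', 'play_strong_opening']
```

On the paper's argument, the gap between the last two methods is what blocks a non-general agent such as AlphaZero from bootstrapping itself into general agency: derive_desire can only elaborate desires confined to the contexts its existing desires already cover, whereas a context-spanning ("productive") desire would have to enter through something like inject_desire, i.e., an exogenous intervention.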