Fully Autonomous AI
Wolfhart Totschnig
Universidad Diego Portales, Av. Ejército 260, Santiago, Chile

© Springer Nature B.V. 2020

Abstract

In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the capacity to “give oneself the law,” to decide by oneself what one’s goal or principle of action will be. The predominant view in the literature on the long-term prospects and risks of artificial intelligence is that an artificial agent cannot exhibit such autonomy because it cannot rationally change its own final goal, since changing the final goal is counterproductive with respect to that goal and hence undesirable. The aim of this paper is to challenge this view by showing that it is based on questionable assumptions about the nature of goals and values. I argue that a general AI may very well come to modify its final goal in the course of developing its understanding of the world. This has important implications for how we are to assess the long-term prospects and risks of artificial intelligence.

Keywords: Artificial intelligence · Autonomy · Normativity · Goals

In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. To create agents that are autonomous in this sense is the central aim of these fields. Until recently, the aim could be achieved only by restricting and controlling the conditions under which the agents will operate. The robots on an assembly line in a factory, for instance, perform their delicate tasks reliably because the surroundings have been meticulously prepared.

Today, however, we are witnessing the creation of artificial agents that are designed to function in “real-world” (that is, uncontrolled) environments. Self-driving cars, which are already in use, and
“autonomous weapon systems,” which are in development, are the most prominent examples. When such machines are called “autonomous,” it is meant that they are able to choose by themselves, without human intervention, the appropriate course of action in the manifold situations they encounter. This way of using the term “autonomy” goes along with the assumption that the artificial agent has a fixed goal or “utility function,” a set purpose with respect to which the appropriateness of its actions will be evaluated. So, in the first example, the agent’s purpose is to drive safely and efficiently from one place to another, and in the second example, it is to neutralize all and only enemy combatants.
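The “fixed utility function” picture of machine autonomy described here can be sketched in a few lines of code. The sketch below is not from the paper; the action names and scores are invented for illustration, and a real system would compute utilities from sensor data rather than a lookup table:

```python
# A minimal sketch of autonomy in the AI/robotics sense: the agent selects
# actions on its own, but only relative to a utility function it cannot revise.

def utility(action):
    """Fixed evaluation standard: the agent has no say over these values."""
    scores = {"brake": 0.9, "swerve": 0.6, "accelerate": 0.2}
    return scores[action]

def choose_action(actions):
    # "Autonomous" here means only: pick the highest-utility action
    # without human intervention. The goal itself stays fixed.
    return max(actions, key=utility)

print(choose_action(["brake", "swerve", "accelerate"]))  # -> brake
```

The philosophical sense of autonomy the paper goes on to discuss would correspond to an agent that could rewrite `utility` itself, not merely optimize against it.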