The State Space of Artificial Intelligence
Holger Lyre
Chair of Theoretical Philosophy & Center for Behavioral Brain Sciences, University of Magdeburg, Magdeburg, Germany

Received: 11 October 2019 / Accepted: 19 August 2020
© The Author(s) 2020
Abstract
The goal of the paper is to develop and propose a general model of the state space of AI. Given the breathtaking progress in AI research and technologies in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is their ability to self-learn, self-learning is identified as one important dimension of the AI state space. A second dimension is generalization: the capacity to move from specific to more general types of problems. A third dimension is semantic grounding. Our overall analysis connects to a number of known foundational issues in the philosophy of mind and cognition: the blockhead objection, the Turing test, the symbol grounding problem, the Chinese room argument, and use theories of meaning. It is finally argued that the dimension of grounding decomposes into three sub-dimensions, and that the dimension of self-learning turns out to be only one of a whole range of “self-x-capacities” (based on ideas from organic computing) that span the self-x-subspace of the full AI state space.

Keywords Artificial intelligence · Deep learning · Self-learning · Semantic grounding · State space of AI · Self-x-property · Self-x-capacity
1 Introduction

There is much to suggest that 15 March 2016 should be regarded as a historic date. On this day Lee Sedol, one of the strongest Go players in the world, lost the final game of a multi-day tournament against the “AlphaGo” AI system of the development company Google DeepMind. AlphaGo defeated the South Korean champion 4 games to 1. The event attracted worldwide attention and brought back memories of the victory of IBM’s “Deep Blue” against the then reigning world
chess champion Garri Kasparov some 20 years earlier. And yet the similarity of the two events is rather superficial. Deep Blue owed its success to pre-implemented heuristic search combined with brute computational power, a strategy that is infeasible for Go due to its sheer complexity. It is said that Go is to chess as chess is to checkers. Consequently, AlphaGo is based on a deep learning neural network (DL network), while Deep Blue was a classic rule-based, symbolic AI system. DL networks represent the latest development in neural network research. They are called “deep” because they consist of more than just two or three layers, sometimes even hundreds. DL networks comprise various types of architectures, such as feedforward, recurrent and convolutional neural networks (cf. Goodfellow et al. 2016; LeCun et al. 2015). The breathtaking successes of DL applications over the last 10 years have led to what Sejnowski (2018) calls the “deep learning revolution”.
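As a rough illustration of what “deep” means here, the following minimal sketch (in Python with NumPy; the layer widths, ReLU activation, and random untrained parameters are illustrative assumptions, not part of the paper) passes an input through a stack of fully connected layers. Depth is simply the number of such stacked layers.

```python
import numpy as np

def relu(x):
    # Rectified linear unit, a common activation in deep networks.
    return np.maximum(0.0, x)

def deep_feedforward(x, weights, biases):
    # Propagate input x through a stack of fully connected layers.
    # A network counts as "deep" in virtue of stacking many such
    # layers; in practice the weights and biases are learned from data.
    a = x
    for W, b in zip(weights, biases):
        a = relu(W @ a + b)
    return a

# Illustrative only: five layers with arbitrary widths and random,
# untrained parameters (a real DL network would be trained).
rng = np.random.default_rng(0)
sizes = [16, 32, 32, 32, 8]
weights = [0.1 * rng.standard_normal((m, n))
           for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(deep_feedforward(rng.standard_normal(16), weights, biases).shape)  # (8,)
```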