OPINION PAPER
Representation, justification, and explanation in a value‑driven agent: an argumentation‑based approach

Beishui Liao1 · Michael Anderson2 · Susan Leigh Anderson3

Received: 11 July 2020 / Accepted: 21 July 2020
© Springer Nature Switzerland AG 2020

1 Zhejiang University, Hangzhou 310028, People's Republic of China
2 University of Hartford, Hartford, CT, USA
3 University of Connecticut, Storrs, CT, USA

* Corresponding author: Beishui Liao, [email protected]

A preliminary version of this paper was presented orally at the 22nd International Conference on Principles and Practice of Multi-Agent Systems (PRIMA 2019) and uploaded to arXiv at https://arxiv.org/abs/1812.05362, but has not been formally published.
Abstract Ethical and explainable artificial intelligence is an interdisciplinary research area involving computer science, philosophy, logic, and the social sciences. For an ethical autonomous system, the ability to justify and explain its decision-making is a crucial aspect of transparency and trustworthiness. This paper takes a Value-Driven Agent (VDA) as an example, explicitly representing the implicit knowledge of a machine learning-based autonomous agent and using this representation to justify and explain the agent's decisions. For this purpose, we introduce a novel formalism to describe the intrinsic knowledge and solutions of a VDA in each situation. Based on this formalism, we formulate an approach to justify and explain the decision-making process of a VDA in terms of a typical argumentation formalism, Assumption-based Argumentation (ABA). As a result, a VDA in a given situation is mapped onto an argumentation framework in which arguments are defined by the notion of deduction. Actions justified with respect to argumentation semantics correspond to solutions of the VDA. The acceptance (rejection) of arguments and their premises in the framework provides an explanation of why an action was selected (or not). Furthermore, we go beyond the existing version of the VDA, considering not only practical reasoning but also epistemic reasoning, so that inconsistency in the knowledge of the VDA can be identified, handled, and explained.

Keywords Ethical AI · Value-driven agent · Explainable AI · Formal argumentation · Assumption-based argumentation
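The abstract's mapping from a VDA to an ABA framework can be made concrete with a small sketch. The following Python fragment is illustrative only: the rules, assumptions, and contraries (a hypothetical eldercare-style scenario with actions do(charge) and do(remind)) are our own examples, not taken from the paper. It shows the three ingredients mentioned above: arguments as deductions resting on sets of assumptions, attacks via contraries of assumptions, and justification of actions under an argumentation semantics (here, a naive grounded-extension computation).

```python
from itertools import product

# Hypothetical ABA framework (illustrative, not from the paper).
# Rules map a head sentence to alternative bodies (sets of sentences);
# a rule with an empty body encodes an observed fact.
rules = {
    "do(charge)": [{"low_battery", "asm_charge_ok"}],
    "do(remind)": [{"medication_due", "asm_remind_ok"}],
    "low_battery": [set()],      # observed fact
    "medication_due": [set()],   # observed fact
}
assumptions = {"asm_charge_ok", "asm_remind_ok"}
# The contrary of an assumption is the sentence whose derivation defeats it;
# here, a low battery defeats the assumption that reminding is OK now.
contrary = {"asm_charge_ok": "battery_full", "asm_remind_ok": "low_battery"}

def arguments():
    """Arguments as deductions: (claim, support) pairs, where support is
    the set of assumptions the deduction ultimately rests on."""
    args = {(a, frozenset({a})) for a in assumptions}
    while True:
        new = set(args)
        for head, bodies in rules.items():
            for body in bodies:
                # For each body sentence, collect the supports deriving it.
                options = [[s for (c, s) in args if c == b] for b in body]
                if all(options):
                    for combo in product(*options):
                        new.add((head, frozenset().union(*combo)))
        if new == args:
            return args
        args = new

def attacks(args):
    """a1 attacks a2 iff a1's claim is the contrary of an assumption
    in a2's support."""
    return {(a1, a2) for a1 in args for a2 in args
            if any(contrary[x] == a1[0] for x in a2[1])}

def grounded(args, atk):
    """Naive fixpoint for the grounded extension: accept an argument once
    all of its attackers are counter-attacked by accepted arguments."""
    accepted, changed = set(), True
    while changed:
        changed = False
        for a in args - accepted:
            attackers = {x for (x, y) in atk if y == a}
            if all(any((d, x) in atk for d in accepted) for x in attackers):
                accepted.add(a)
                changed = True
    return accepted

args = arguments()
acc = grounded(args, attacks(args))
for claim, support in sorted(args):
    if claim.startswith("do("):
        verdict = "justified" if (claim, support) in acc else "rejected"
        print(f"{claim} from {sorted(support)}: {verdict}")
```

In this toy framework the derivable fact low_battery is the contrary of asm_remind_ok, so the argument for do(remind) is defeated by an unattacked argument and rejected, while do(charge) is unattacked and justified. Reading off which premise was defeated is precisely the style of explanation of why an action was (not) selected that the abstract describes.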
1 Introduction

Ethical, explainable artificial intelligence has become an increasingly active research area. An autonomous agent should make decisions by determining ethically preferable behavior [1, 23]. Furthermore, it is expected to provide explanations to human beings about how and why its decisions are made [8]. Explainable AI is an interdisciplinary research direction, involving computer science, philosophy,
cognitive psychology/science, and social psychology [26]. In recent years, different approaches have been proposed to provide explanations for autonomous systems, though most of them are still rather preliminary. For instance, [14] proposed an architecture combining artificial neural networks and argumentation for solving bin