Adaptive Learning for Efficient Emergence of Social Norms in Networked Multiagent Systems
1 School of Computer Science and Technology, Dalian University of Technology, Dalian 116024, China
[email protected], [email protected], [email protected]
2 Department of Mathematical and Computer Sciences, University of Tulsa, Tulsa, OK 74104, USA
[email protected]
3 School of Computer Science and Software Engineering, University of Wollongong, Wollongong 2500, Australia
[email protected]
Abstract. This paper investigates how norm emergence can be facilitated by agents’ adaptive learning behaviors in networked multiagent systems. A general learning framework is proposed, in which agents can dynamically adapt their learning behaviors through social learning of their individual learning experience. Extensive verification of the proposed framework is conducted in a variety of situations, using comprehensive evaluation criteria of efficiency, effectiveness and efficacy. Experimental results show that the adaptive learning framework is robust and efficient for evolving stable norms among agents.
Keywords: Norm emergence · Learning · Multiagent systems

1 Introduction
Coordination of agent behaviors is central to Multiagent Systems (MASs). A social norm is an effective technique for achieving coordination in MASs by placing social constraints on agents' action choices [1]. Understanding how social norms can emerge through local interactions has attracted increasing attention in MAS research. Numerous investigations of norm emergence have been conducted in recent years under different assumptions about agent interaction protocols, societal topologies and observation capabilities [2–4]. Learning from individual experience has been shown to be a robust mechanism for enabling norm emergence in MASs [5]. A great deal of work has studied norm emergence achieved through agent learning behaviors [6–13]. The focus of these existing studies is to examine general mechanisms behind efficient emergence of social norms while agents interact with each other using basic learning (particularly reinforcement learning) methods. These mechanisms include the social learning strategy [6,7], the collective interaction protocol [11–13], the utilization of topological knowledge [8,9], and the observation capability of agents [10]. Although these studies provide a deep understanding of efficient mechanisms of norm emergence, they inevitably share the same limitation: their learning parameters are fine-tuned by hand and thus cannot be adapted to varying norm-emergence situations. A key question then arises: how can agents adapt their learning behaviors dynamically during the process of norm emergence, and how does this kind of adaptiveness influence the final emergence performance? To this end, this paper offers another perspective on the research of norm emergence by focusing on the role of learning itself.

© Springer International Publishing Switzerland 2016. R. Booth and M.-L. Zhang (Eds.): PRICAI 2016, LNAI 9810, pp. 805–818, 2016. DOI: 10.1007/978-3-319-42911-3_68
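To make the baseline setting concrete, the sketch below shows the kind of fixed-parameter reinforcement-learning setup the paragraph above describes (and that this paper argues against): agents on a network repeatedly play a pairwise coordination game and update action values with a stateless Q-learning rule, until most agents greedily prefer the same action, i.e. a norm has emerged. The ring topology, the agent count N, and the hand-tuned ALPHA and EPS values are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(0)

N = 20              # number of agents on a ring network (assumed size)
ACTIONS = [0, 1]    # two competing conventions; the winner becomes the norm
ALPHA, EPS = 0.1, 0.1  # hand-tuned learning and exploration rates: exactly the
                       # kind of fixed parameters the paper's framework adapts

# Each agent keeps one Q-value per action, learned from pairwise games.
Q = [[0.0, 0.0] for _ in range(N)]

def choose(i):
    """Epsilon-greedy action selection for agent i."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return 0 if Q[i][0] >= Q[i][1] else 1

for episode in range(2000):
    i = random.randrange(N)
    j = (i + random.choice([-1, 1])) % N   # a random neighbour on the ring
    ai, aj = choose(i), choose(j)
    r = 1.0 if ai == aj else -1.0          # coordination-game payoff
    # Stateless Q-learning update for both players
    Q[i][ai] += ALPHA * (r - Q[i][ai])
    Q[j][aj] += ALPHA * (r - Q[j][aj])

# A norm has emerged when (almost) all agents greedily prefer one action.
prefs = [0 if q[0] >= q[1] else 1 for q in Q]
share = max(prefs.count(0), prefs.count(1)) / N
print(f"largest convention share: {share:.2f}")
```

Because ALPHA and EPS are frozen, the convergence speed and stability of this sketch depend entirely on how well those values happen to suit the topology and population size, which is the limitation the adaptive framework proposed in this paper is designed to remove.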