Analysis of Hannan consistent selection for Monte Carlo tree search in simultaneous move games
Vojtěch Kovařík¹ · Viliam Lisý¹

Received: 21 August 2017 / Revised: 7 July 2019 / Accepted: 17 July 2019
© The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature 2019
Abstract

Hannan consistency, or no external regret, is a key concept for learning in games. An action selection algorithm is Hannan consistent (HC) if its performance is eventually as good as that of selecting the best fixed action in hindsight. If both players in a zero-sum normal form game use a Hannan consistent algorithm, their average behavior converges to a Nash equilibrium of the game. A similar result is known for extensive form games, but there the played strategies need to be Hannan consistent with respect to the counterfactual values, which are often difficult to obtain. We study zero-sum extensive form games with simultaneous moves, but otherwise perfect information. These games generalize normal form games and are a special case of extensive form games. We study whether applying HC algorithms in each decision point of these games directly to the observed payoffs leads to convergence to a Nash equilibrium. This learning process corresponds to a class of Monte Carlo tree search algorithms, which are popular for playing simultaneous-move games but have no known performance guarantees. We show that using HC algorithms directly on the observed payoffs is not sufficient to guarantee convergence. With an additional averaging over joint actions, convergence is guaranteed, but empirically slower. We further define an additional property of HC algorithms which is sufficient to guarantee convergence without the averaging, and we empirically show that commonly used HC algorithms have this property.

Keywords Nash equilibrium · Extensive form games · Simultaneous moves · Zero sum · Hannan consistency
Editor: Prasad Tadepalli.
Vojtěch Kovařík [email protected]
Viliam Lisý [email protected]
¹ Artificial Intelligence Center, Department of Computer Science, Faculty of Electrical Engineering, Czech Technical University in Prague, Zikova 1903/4, 166 36 Prague 6, Czech Republic
1 Introduction

Research on learning in games has led to multiple algorithmic advancements, such as the variants of the counterfactual regret minimization algorithm (Zinkevich et al. 2007), which allowed for achieving human performance in poker (Moravčík et al. 2017; Brown and Sandholm 2018). Learning in games has been extensively studied in the context of normal form games, extensive form games, as well as Markov games (Fudenberg et al. 1998; Littman 1994). One of the key concepts in learning in games is Hannan consistency (Hannan 1957; Hart and Mas-Colell 2000; Cesa-Bianchi and Lugosi 2006), also known as no external regret. An algorithm for repetitively selecting actions from a fixed set of options is Hannan consistent (HC) if its performance approaches the performance of selecting the best fixed action in hindsight.
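To make this precise, Hannan consistency is usually formalized via external regret. The notation below (action set $A$, per-round payoff functions $u_t$, chosen actions $a_t$) is our shorthand for the standard definition in Cesa-Bianchi and Lugosi (2006), not the paper's own later formalism:

$$
R_T \;=\; \max_{a \in A} \sum_{t=1}^{T} \bigl( u_t(a) - u_t(a_t) \bigr),
\qquad
\text{HC} \;\Longleftrightarrow\; \limsup_{T \to \infty} \frac{1}{T} R_T \le 0 \ \text{almost surely.}
$$

A classic algorithm with this property is regret matching (Hart and Mas-Colell 2000). The following is a minimal sketch of its full-information variant in Python; the class name and interface are illustrative rather than taken from the paper, and the bandit-feedback setting that arises inside Monte Carlo tree search would additionally require estimating the payoffs of unplayed actions:

```python
import random

class RegretMatching:
    """Minimal full-information regret matching (Hart & Mas-Colell 2000).

    Plays each action with probability proportional to its positive
    cumulative regret, and uniformly at random when no regret is positive.
    """

    def __init__(self, n_actions: int):
        self.n = n_actions
        self.regret = [0.0] * n_actions  # cumulative regret per action

    def select(self) -> int:
        positive = [max(r, 0.0) for r in self.regret]
        total = sum(positive)
        if total <= 0.0:
            return random.randrange(self.n)  # uniform fallback
        return random.choices(range(self.n), weights=positive)[0]

    def update(self, payoffs: list, played: int) -> None:
        # payoffs[a] = payoff action a would have earned this round
        # (full-information feedback is assumed in this sketch).
        for a in range(self.n):
            self.regret[a] += payoffs[a] - payoffs[played]
```

Averaging the empirical play frequencies of two such learners in a zero-sum matrix game recovers the convergence to a Nash equilibrium mentioned above.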