On the Asymptotic Nature of First Order Mean Field Games


Markus Fischer¹ · Francisco J. Silva²

¹ Dipartimento di Matematica “Tullio Levi-Civita”, Università degli Studi di Padova, via Trieste, 63, 35121 Padova, Italia
² Institut de recherche XLIM-DMI, UMR-CNRS 7252, Faculté des Sciences et Techniques, Université de Limoges, 87060 Limoges, France

© The Author(s) 2020

Abstract For a class of finite horizon first order mean field games and associated N-player games, we give a simple proof of convergence of symmetric N-player Nash equilibria in distributed open-loop strategies to solutions of the mean field game in Lagrangian form. Lagrangian solutions are then connected with those determined by the usual mean field game system of two coupled first order PDEs, and convergence of Nash equilibria in distributed Markov strategies is established.

Keywords Mean field games · Lagrangian form · Deterministic dynamics · Nash equilibrium · Distributed strategies

Mathematics Subject Classification 49N70 · 60B10 · 91A06 · 91A13
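For orientation, the "usual mean field game system of two coupled first order PDEs" mentioned in the abstract is commonly written as a backward Hamilton–Jacobi–Bellman equation coupled with a forward continuity equation. The following display is a generic sketch only; the Hamiltonian H, the couplings f and g, and the initial distribution m_0 are placeholder data and need not coincide with the assumptions made later in the paper:

\[
\begin{cases}
-\partial_t u(t,x) + H\bigl(x, D_x u(t,x)\bigr) = f\bigl(x, m(t)\bigr), & (t,x) \in (0,T) \times \mathbb{R}^d, \\
\partial_t m(t,x) - \operatorname{div}\Bigl( m(t,x)\, D_p H\bigl(x, D_x u(t,x)\bigr) \Bigr) = 0, & (t,x) \in (0,T) \times \mathbb{R}^d, \\
u(T,x) = g\bigl(x, m(T)\bigr), \quad m(0,\cdot) = m_0,
\end{cases}
\]

where u is the value function of a representative player and m(t) denotes the distribution of the players at time t.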

1 Introduction

The purpose of this article is to illustrate a simple way of establishing convergence of open-loop Nash equilibria in the case of first-order non-stationary Mean Field Games (MFGs). Introduced by Lasry and Lions and, independently, by Huang, Malhamé and Caines about fifteen years ago (cf. [33,36]), mean field games are limit models for non-cooperative symmetric N-player differential games as the number of players N tends to infinity; see, for instance, the lecture notes [13] and the recent two-volume work [18]. The notion of solution usually adopted for the prelimit models is that of a Nash equilibrium.
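To fix ideas, recall the generic form of this solution notion; the cost functionals J^N_i and the strategy sets below are illustrative placeholders rather than the precise objects introduced later in the paper. A strategy profile (\alpha_1, \dots, \alpha_N) is a Nash equilibrium of the N-player game if no player can reduce their cost by a unilateral deviation:

\[
J^N_i(\alpha_1,\dots,\alpha_N) \;\le\; J^N_i(\alpha_1,\dots,\alpha_{i-1},\beta,\alpha_{i+1},\dots,\alpha_N)
\quad \text{for every admissible } \beta \text{ and every } i \in \{1,\dots,N\}.
\]

An \varepsilon_N-Nash equilibrium satisfies the same inequalities up to an additive error \varepsilon_N; the approximation results discussed below yield such profiles with \varepsilon_N \to 0 as N \to \infty.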


A standard way of making the connection with the limit model rigorous is to show that a solution of the mean field game yields approximate Nash equilibria for the N-player games, with approximation error vanishing as N → ∞. In the opposite direction, one aims to prove that a sequence of N-player Nash equilibria converges, as N tends to infinity, to the mean field game limit. When Nash equilibria are considered in stochastic open-loop strategies, their convergence is well understood and can be established under mild conditions; see [28] and [34], both for finite horizon games with general, possibly degenerate, Brownian dynamics. The convergence analysis is much harder when Nash equilibria are defined over Markov feedback strategies with full state information. A first result in this setting was given by Gomes, Mohr, and Souza [29] for continuous time games with finite state space. There, convergence of Markovian Nash equilibria is proved, but only if the time horizon is small enough. A breakthrough was achieved by Cardaliaguet, Delarue, Lasry, and Lions in [15]. In the setting of games with non-degenerate Brownian dynamics, possibly including common noise, convergence to the mean field game limit is established there for arbitrary time horizon provi