Deterministic mean field games with control on the acceleration



Nonlinear Differential Equations and Applications NoDEA

Yves Achdou, Paola Mannucci, Claudio Marchi and Nicoletta Tchou

Abstract. In the present work, we study deterministic mean field games (MFGs) with finite time horizon in which the dynamics of a generic agent is controlled by the acceleration. They are described by a system of PDEs coupling a continuity equation for the density of the distribution of states (forward in time) and a Hamilton–Jacobi equation for the optimal value of a representative agent (backward in time). The state variable is the pair (x, v) ∈ R^N × R^N, where x stands for the position and v for the velocity. The dynamics is often referred to as the double integrator. In this case, the Hamiltonian of the system is neither strictly convex nor coercive, hence the available results on MFGs cannot be applied. Moreover, we will assume that the Hamiltonian is unbounded w.r.t. the velocity variable v. We prove the existence of a weak solution of the MFG system via a vanishing viscosity method and we characterize the distribution of states as the image of the initial distribution by the flow associated with the optimal control.

Mathematics Subject Classification. 35F50, 35Q91, 49K20, 49L25.

Keywords. Mean field games, First order Hamilton–Jacobi equations, Double integrator, Non-coercive Hamiltonian.

1. Introduction

The theory of mean field games (MFGs for short) has been increasingly investigated since the pioneering works [24–26] of Lasry and Lions: it aims at studying the asymptotic behaviour of differential games (Nash equilibria) as the number of agents tends to infinity. In the present work, we study deterministic mean field games with finite time horizon in which the dynamics of a generic agent is controlled by the acceleration. They are described by a system of PDEs coupling a continuity equation for the density of the distribution of states (forward in time) and a Hamilton–Jacobi (HJ) equation for the optimal value


of a representative agent (backward in time). The state variable is the pair (x, v) ∈ R^N × R^N, where x stands for the position and v for the velocity. The system of PDEs is of the form

    (i)   −∂t u − v · Dx u + H(x, v, Dv u) − F[m(t)](x, v) = 0    in R^{2N} × (0, T),
    (ii)  ∂t m + v · Dx m − divv (Dpv H(x, v, Dv u) m) = 0        in R^{2N} × (0, T),
    (iii) m(x, v, 0) = m0(x, v),  u(x, v, T) = G[m(T)](x, v)      on R^{2N},
                                                                      (1.1)

where T is a positive real number, u = u(x, v, t), m = m(x, v, t), t ∈ (0, T), and H is defined by

    H(x, v, pv) = max_{α ∈ R^N} (−α · pv − l(x, v, α)).               (1.2)
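The maximization in (1.2) is over all controls α ∈ R^N. As a numerical illustration (not taken from the paper), the sketch below approximates H by brute force over a 1D grid of controls, for an illustrative quadratic running cost with the x-dependent part set to zero; the grid bounds and resolution are arbitrary choices for the demonstration.

```python
# Numerical illustration (hypothetical): approximate the Hamiltonian
#   H(x, v, p_v) = max_alpha ( -alpha * p_v - l(x, v, alpha) )
# by brute force over a grid of scalar controls, for the illustrative
# quadratic running cost l(x, v, alpha) = 0.5*alpha**2 + 0.5*v**2
# (the bounded term l(x, v) is set to zero here).

def hamiltonian_numeric(p_v, v, n=200001, amax=5.0):
    """Brute-force maximization of -alpha*p_v - 0.5*alpha**2 - 0.5*v**2
    over n equally spaced controls alpha in [-amax, amax]."""
    best = float("-inf")
    for k in range(n):
        alpha = -amax + 2.0 * amax * k / (n - 1)
        val = -alpha * p_v - 0.5 * alpha ** 2 - 0.5 * v ** 2
        best = max(best, val)
    return best

# For this cost the exact maximizer is alpha* = -p_v,
# so H = 0.5*p_v**2 - 0.5*v**2.
p_v, v = 1.3, 0.7
H_num = hamiltonian_numeric(p_v, v)
H_exact = 0.5 * p_v ** 2 - 0.5 * v ** 2
```

The grid approximation agrees with the closed-form value to high accuracy, since the objective is concave and quadratic in α.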

We take F and G strongly regularizing and we assume that the running cost has the form l(x, v, α) = l(x, v) + (1/2)|α|² + (1/2)|v|², where (x, v) → l(x, v) is a bounded and C²-bounded function. Formally, systems of this form arise when the dynamics of the generic player is described by a double integrator:

    ξ′(s) = η(s),   s ∈ (t, T),
    η′(s) = α(s),   s ∈ (t, T),                                       (1.3)
    ξ(t) = x,
    η(t) = v,
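Since the running cost is quadratic in the control, the maximization in (1.2) can be carried out explicitly by completing the square. The following computation is a routine sketch consistent with the assumptions above, not a formula quoted from the paper:

```latex
H(x,v,p_v)
  = \max_{\alpha\in\mathbb{R}^N}\Bigl(-\alpha\cdot p_v - \tfrac12|\alpha|^2\Bigr)
    - l(x,v) - \tfrac12|v|^2
  = \tfrac12|p_v|^2 - \tfrac12|v|^2 - l(x,v),
```

with the maximum attained at α* = −p_v. In particular, H is strictly convex and coercive in p_v alone, but the full Hamiltonian of equation (1.1)(i), p = (p_x, p_v) → −v · p_x + H(x, v, p_v), is linear in p_x, hence neither strictly convex nor coercive in the full gradient, and H is unbounded from below as |v| → ∞; this is the source of the difficulties described in the abstract.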