Fully discrete schemes for monotone optimal control problems
Laura S. Aragone1 · Lisandro A. Parente1 · Eduardo A. Philipp1
Received: 22 March 2016 / Revised: 21 July 2016 / Accepted: 8 September 2016 © SBMAC - Sociedade Brasileira de Matemática Aplicada e Computacional 2016
Abstract In this article, we study an infinite horizon optimal control problem with monotone controls. We analyze the associated Hamilton–Jacobi–Bellman (HJB) variational inequality which characterizes the value function, and consider the fully discretized problem using Lagrange elements to approximate the state space. Convergence orders of these approximations are proved; they are in general (h + k/√h)^γ, where γ is the Hölder exponent of the value function u, and h and k are the time and space discretization parameters, respectively. A special choice of the relation between h and k yields a convergence order of k^(2γ/3), which is valid without semiconcavity hypotheses on the problem's data.

Keywords Monotone optimal control problems · HJB variational inequality · Numerical solutions

Mathematics Subject Classification 49J15 · 49M25
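Given an error estimate of the form (h + k/√h)^γ, with h the time step and k the space step, the improved rate follows from a standard balancing argument. The following sketch reconstructs the relation implicit in the abstract; the constant C and the norm are placeholders:

```latex
\[
  \|u - u_h^k\|_\infty \;\le\; C\Big(h + \tfrac{k}{\sqrt{h}}\Big)^{\gamma}.
\]
Choosing $k = h^{3/2}$ makes the two terms comparable, since $k/\sqrt{h} = h$,
and then
\[
  \Big(h + \tfrac{k}{\sqrt{h}}\Big)^{\gamma} = (2h)^{\gamma}
  = 2^{\gamma}\, k^{\frac{2}{3}\gamma},
\]
so the fully discrete error is of order $k^{\frac{2}{3}\gamma}$.
```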
1 Introduction

We consider stationary controlled dynamics with infinite horizon. The optimal control problem deals with the minimization of an integral functional where the controls are constrained to
Communicated by Domingo Alberto Tarzia. This work was partially supported by Grant PIP CONICET 286/2012 and PICT ANPCYT 2212/2012.
Eduardo A. Philipp [email protected] Laura S. Aragone [email protected] Lisandro A. Parente [email protected]
1 CONICET-CIFASIS-UNR, Av. 27 de Febrero 210 bis, 2000 Rosario, Argentina
be non-decreasing. With this subset of controls it is not possible to define an adequate dynamic programming principle (DPP); nevertheless, this difficulty can be overcome by introducing a new variable a that represents the minimum initial value of the monotone controls (see Bardi et al. 1997; Barron and Jensen 1980). From the DPP, the HJB conditions are deduced and have in this case the form of a variational inequality (see Barron 1985 for the finite horizon case and Bardi et al. 1997 for the infinite horizon case). A different approach to the finite horizon case was taken by Hellwig (2009), who established a Pontryagin maximum principle for a problem in economics: the principal-agent problem. Other problems in economics have been studied as problems with monotone controls in a stochastic setting, e.g., the monotone follower problem, which is a type of singular control problem (see Karatzas and Shreve 1984 and the recent articles by Ferrari et al. 2016; Chiarolla et al. 2015). In these problems, the aim is to meet a random demand and the controls are non-decreasing processes that represent the (irreversible) cumulative investment. Some other stochastic optimal control problems, which a priori are not constrained to have monotone controls, have the special property that the optimal policies are monotone.
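The augmentation described above can be sketched numerically. The following is an illustrative value iteration (not the scheme analyzed in this paper) for a fully discrete infinite-horizon problem with monotone controls: the state is (x, a), where a records the minimum value the control may take from now on, so the admissible control set at (x, a) is [a, A]. The dynamics g, running cost f, grids and parameters below are all hypothetical choices for illustration:

```python
import numpy as np

lam = 1.0                          # discount rate
h = 0.1                            # time step
A = 1.0                            # upper bound of the control interval
xs = np.linspace(-1.0, 1.0, 21)    # spatial grid
avals = np.linspace(0.0, A, 11)    # grid for the auxiliary variable a

def g(x, alpha):                   # example dynamics
    return -x + alpha

def f(x, alpha):                   # example running cost
    return x**2 + 0.1 * alpha**2

u = np.zeros((len(xs), len(avals)))
for _ in range(1000):
    u_new = np.empty_like(u)
    for j, a in enumerate(avals):
        alphas = avals[j:]         # monotonicity: only controls alpha >= a
        for i, x in enumerate(xs):
            best = np.inf
            for k_idx, alpha in enumerate(alphas):
                xn = np.clip(x + h * g(x, alpha), xs[0], xs[-1])
                # after choosing alpha, the new minimum level is a' = alpha,
                # so the continuation value is read in column j + k_idx
                v = (h * f(x, alpha)
                     + (1 - lam * h) * np.interp(xn, xs, u[:, j + k_idx]))
                best = min(best, v)
            u_new[i, j] = best
    if np.max(np.abs(u_new - u)) < 1e-9:
        u = u_new
        break
    u = u_new

# Shrinking the admissible control set (larger a) can only raise the value:
assert np.all(np.diff(u, axis=1) >= -1e-12)
```

The iteration is a contraction with factor (1 - lam*h), so it converges geometrically; the final assertion checks the monotonicity of the value function in the auxiliary variable a, which reflects the fact that a larger a restricts the control set.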