Optimal sampled-data controls with running inequality state constraints: Pontryagin maximum principle and bouncing trajectory phenomenon



Series A

Loïc Bourdin¹ · Gaurav Dhar¹

Received: 2 July 2019 / Accepted: 25 September 2020

© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2020

Abstract

In the present paper we derive a Pontryagin maximum principle for general nonlinear optimal sampled-data control problems in the presence of running inequality state constraints. We obtain, in particular, a nonpositive averaged Hamiltonian gradient condition associated with an adjoint vector that is a function of bounded variation. As is well known, theoretical and numerical difficulties may arise due to the possible pathological behavior of the adjoint vector (jumps and a singular part lying on the portions of the optimal trajectory in contact with the boundary of the restricted state space). However, in our setting with sampled-data controls, we prove that, under certain general hypotheses, the optimal trajectory activates the running inequality state constraints at most at the sampling times. Due to this so-called bouncing trajectory phenomenon, the adjoint vector experiences jumps at most at the sampling times (and thus finitely many, at precisely known instants) and its singular part vanishes. Taking advantage of this information, we are able to implement an indirect numerical method, which we use to solve three simple examples.

Keywords Optimal control · Sampled-data control · Pontryagin maximum principle · State constraints · Ekeland variational principle · Indirect numerical method · Shooting method

Mathematics Subject Classification 34H05 · 49M05 · 93C10 · 93C57

Correspondence:
Gaurav Dhar [email protected]
Loïc Bourdin [email protected]

¹ Institut de recherche XLIM, UMR CNRS 7252, Université de Limoges, Limoges, France



1 Introduction

In mathematics, a dynamical system describes the evolution of a point (usually called the state of the system) in an appropriate space (called the state space) according to an evolution rule (called the dynamics of the system). Dynamical systems are of many different natures (continuous versus discrete systems, deterministic versus stochastic systems, etc.). A continuous system is a dynamical system in which the state evolves continuously in time (for instance, ordinary differential equations, evolution partial differential equations, etc.), while a discrete system is a dynamical system in which the state evolves at discrete time instants (for instance, difference equations, quantum differential equations, etc.). A control system is a dynamical system in which a control parameter intervenes in the dynamics and thus influences the evolution of the state. Finally, an optimal control problem consists in determining a control which steers the state of a control system from an initial condition to some desired target while minimizing a given cost and satisfying some constraints.

Context in optimal control theory. Est
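To fix ideas, a generic optimal sampled-data control problem with a running inequality state constraint can be sketched as follows. This is an illustrative formulation only (the symbols L, f, h, x₀ and the sampling times tᵢ are generic placeholders, not the exact notation or setting of the present paper): the control is constrained to be piecewise constant on a fixed sampling partition 0 = t₀ < t₁ < … < t_N = T, while the state constraint h(x(t)) ≤ 0 must hold at every time t, not only at the sampling times.

```latex
\begin{aligned}
\text{minimize} \quad & \int_{0}^{T} L\bigl(x(t),u(t)\bigr)\,\mathrm{d}t \\
\text{subject to} \quad & \dot{x}(t) = f\bigl(x(t),u(t)\bigr) \ \text{for a.e.\ } t \in [0,T], \qquad x(0) = x_0, \\
& u(t) = u_i \ \text{for all } t \in [t_i, t_{i+1}), \quad i = 0,\dots,N-1 \quad \text{(sampled-data control)}, \\
& h\bigl(x(t)\bigr) \le 0 \ \text{for all } t \in [0,T] \quad \text{(running inequality state constraint)}.
\end{aligned}
```

In this sketch the decision variables are the finitely many control values u₀, …, u_{N-1}; the bouncing trajectory phenomenon discussed in the abstract concerns the times at which the constraint h(x(t)) ≤ 0 becomes active along the optimal trajectory.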