Approximate Dynamic Programming for Generation of Robustly Stable Feedback Controllers
Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, CB3 0WB Cambridge, United Kingdom
Optimization in Engineering Center (OPTEC)/Electrical Engineering Department (ESAT), K.U.Leuven, Kasteelpark Arenberg 10, 3001 Leuven-Heverlee, Belgium
[email protected]
Abstract In this paper, we present a technique for approximate robust dynamic programming that makes it possible to generate feedback controllers with guaranteed stability, even under worst-case disturbances. Our approach is closely related to robust variants of Model Predictive Control (MPC) and is suitable for linearly constrained polytopic systems with piecewise affine cost functions. The approximation method uses polyhedral representations of the cost-to-go function and the feasible set, and can considerably reduce the computational burden compared to recently proposed methods for exact dynamic programming for robust MPC [1, 8]. We derive novel conditions for guaranteeing closed-loop stability that are based on the concept of a “uroborus”. We finish by applying the method to a state-constrained tutorial example, a parking car with uncertain mass.
J. Björnberg and M. Diehl

1 Introduction The optimization-based feedback control technique of Model Predictive Control (MPC) has attracted much attention in the last two decades and is nowadays widespread in industry, with many thousands of large-scale applications reported, in particular in the process industries [19]. Its idea is, simply speaking, to use a model of a real plant to predict and optimize its future behaviour over a so-called prediction horizon, in order to obtain an optimal plan of future control actions. Of this plan, only the first step is applied to the real plant for one sampling time; afterwards the real system state – which might differ from the prediction – is observed again, and a new prediction and optimization is performed to generate the next sampling time’s feedback control. So far, in nearly all MPC applications, a deterministic – or nominal – model is used for prediction and optimization. Although the reason for repeated online optimization and feedback is precisely the non-deterministic nature of the process, this “nominal MPC” approach is nevertheless useful in practice due to its inherent robustness properties [9, 10]. In contrast to this, robust MPC, originally proposed by Witsenhausen [21], is directly based on a worst-case optimization of future system behaviour. While a key assumption in nominal MPC is that the system is deterministic and known, in robust MPC the system is not assumed to be known exactly, and the optimization is performed against the worst-case predicted system behaviour. Robust MPC thus typically leads to min-max optimization problems, which arise either from an open-loop or from a closed-loop formulation of the optimal control problem [14]. In this paper, we are concerned with the less conservative, but computationally more demanding, closed-loop formulation. We regard discrete-time polytopic systems with piecewise affine cost functions.
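To make the closed-loop min-max formulation concrete, the following is a minimal sketch (not the paper's algorithm) for a scalar polytopic system x+ = a·x + b·u, where the uncertain parameter a lies in the interval [0.9, 1.1]. All numerical values, the coarse control grid, and the piecewise affine cost |x| + |u| are illustrative assumptions; the worst case is taken over the two vertices of the uncertainty interval, and the recursion over a scenario tree is exactly the exponential effort that polyhedral approximations of the cost-to-go are meant to avoid.

```python
# Hedged sketch: closed-loop min-max dynamic programming for a scalar
# polytopic system x+ = a*x + b*u, a in [0.9, 1.1]. Illustrative only.

def minmax_cost_and_input(x, depth=0, N=2, A=(0.9, 1.1), b=1.0):
    """Closed-loop min-max DP: minimize over u, maximize over the vertices
    of the uncertainty polytope A; returns (worst-case cost, first input)."""
    if depth == N:
        return abs(x), 0.0                       # terminal cost |x_N|
    u_grid = [i / 10.0 for i in range(-20, 21)]  # coarse control grid
    best_cost, best_u = float("inf"), 0.0
    for u in u_grid:
        # worst case over the two vertices of the interval [0.9, 1.1]
        wc = max(minmax_cost_and_input(a * x + b * u, depth + 1, N, A, b)[0]
                 for a in A)
        cost = abs(x) + abs(u) + wc              # piecewise affine stage cost
        if cost < best_cost:
            best_cost, best_u = cost, u
    return best_cost, best_u

# Receding horizon: apply only the first input, observe the state, re-optimize.
x = 1.0
for k in range(3):
    _, u = minmax_cost_and_input(x)
    x = 1.05 * x + 1.0 * u  # "true" plant, with a = 1.05 inside the polytope
```

Even though the true value of a is never known to the controller, optimizing against the vertex worst case drives the state toward the origin; note that the controller replans from the measured state at every step, which is what distinguishes the closed-loop from the open-loop min-max formulation.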