A Discussion of Probability Functions and Constraints from a Variational Perspective
Wim van Ackooij¹

Received: 10 March 2020 / Accepted: 14 August 2020 / © Springer Nature B.V. 2020
Abstract

Probability constraints are a popular modelling mechanism in applications. They help to model feasible decisions when the latter must be taken prior to observing uncertainty, and both the decisions and the uncertainty enter a constraint of an optimization problem. The popularity of this paradigm is attested by a vast literature using probability constraints. In this work we try to provide, with variational analysis in mind, an introduction to the topic. We wish to highlight questions regarding the understanding of theoretical properties, such as continuity, (generalized) differentiability and convexity, but also regarding algorithms. We try to highlight open research avenues whenever possible.

Keywords: Probability constraints · Probability functions · Set-valued analysis · Stochastic optimization

Mathematics Subject Classification (2010): 90C15
Wim van Ackooij
[email protected]

1 EDF R & D, OSIRIS, 7, Boulevard Gaspard Monge, F-91120, Palaiseau, France

1 Introduction

Handling uncertainty is a key aspect of practical applications. However, once such a statement is made, a wide variety of further modelling choices open up. The key choice seems to be whether one is interested in modelling recourse, i.e., the possibility for decisions to act on "observed" uncertainty, within the model. Other choices include the number of adaptations, i.e., stages, one wishes to model. It is interesting to think of this as a modelling choice. First, this allows one to adjust the number of stages to balance solvability, model complexity, realism and accuracy. Second, optimization problems are models and therefore by definition wrong, but potentially useful. Reality therefore does not dictate any specific way to write the model, beyond the aim of obtaining useful models. Finally, it is rare that iterated decision making is conceived with a specific end in mind. Therefore the whole idea of a finite number of stages was already an approximation anyway. These general observations illustrate the key point: it makes sense to consider the parts of uncertainty which cannot be hedged against. Indeed, unforeseeable phenomena will always occur in practice. It may thus be appropriate to not capture full "hedgeability" within the model. What does this imply concretely? It implies that some decisions in the model have to be taken ahead of observing uncertainty. Both the decision x and the uncertainty vector ξ can figure in a constraint of the problem, e.g., g(x, ξ) ≤ 0. We are thus, in a way, facing a random inequality system. What could it mean to pick a good vector x if the realization of g(x, ξ) ≤ 0 is only observed after the decision has been implemented? Giving meaning to x being feasible can be achieved through the use of probability constraints, i.e., requesting that the underlying inequalities hold with sufficiently high probability: P[g(x, ξ) ≤ 0] ≥ p for a given safety level p ∈ (0, 1].
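To make the idea concrete, the probability function φ(x) = P[g(x, ξ) ≤ 0] can be estimated by Monte Carlo sampling and the probability constraint then reads φ(x) ≥ p. The sketch below is purely illustrative: the map g(x, ξ) = ξ₁x₁ + ξ₂x₂ − b, the Gaussian distribution of ξ, and the values of b and p are assumptions made for the example, not choices taken from the paper.

```python
import random

def estimate_probability_function(x, b=10.0, n_samples=100_000, seed=42):
    """Monte Carlo estimate of phi(x) = P[g(x, xi) <= 0] for the
    illustrative choice g(x, xi) = xi_1 * x_1 + xi_2 * x_2 - b,
    with xi_1, xi_2 i.i.d. standard Gaussian (an assumption for
    this sketch, not a model from the paper)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        g = rng.gauss(0.0, 1.0) * x[0] + rng.gauss(0.0, 1.0) * x[1] - b
        if g <= 0.0:
            hits += 1
    return hits / n_samples

# A decision x is deemed feasible when phi(x) >= p for a safety level p.
phi = estimate_probability_function((1.0, 2.0))
feasible = phi >= 0.95
```

Note that such a sampling estimate is only a numerical surrogate for φ; the theoretical questions raised in this work (continuity, differentiability, convexity of φ) concern the exact probability function itself.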