Some scientific results of Yu. M. Ermoliev and his school in modern stochastic optimization theory
CYBERNETICS

P. S. Knopov and I. V. Sergienko

V. M. Glushkov Institute of Cybernetics, National Academy of Sciences of Ukraine, Kyiv, Ukraine
UDC 519.21
Abstract. Main results obtained by Ukrainian scientists in stochastic optimization are reviewed. Most attention is paid to the results of Yu. M. Ermoliev and his school in quasigradient stochastic programming methods, which are considered classical.

Keywords: stochastic optimization theory, optimal control, quasigradients.

INTRODUCTION

Since the early years of the V. M. Glushkov Institute of Cybernetics of the NAS of Ukraine, studies in optimization and optimal control theory, modeling, risk theory, and systems analysis have been of the highest priority. The scientific results obtained at the Institute in this field are widely known both in Ukraine and abroad, and its schools in the theory of nondifferentiable and discrete optimization, stochastic optimization, risk theory, and optimal control have won international acclaim. In this paper, we dwell on stochastic optimization, one of the most important fields of optimization theory, with special emphasis on the results of Yu. M. Ermoliev and his school.

Traditional deterministic optimization methods are intended for well-determined objective functions and constraints, i.e., for the case where the function $f^0(x)$ being minimized (or maximized) can be computed exactly and the constraints $f^i(x) \le 0$, $i = 1, \dots, m$, can be tested for any solution vector $x = (x_1, \dots, x_n) \in X$, where the set $X$ has a "simple" structure (for example, is defined by linear constraints). It is also usually assumed that the gradients, or subgradients in the nonsmooth case, $f^i_x$ of the functions $f^i$, $i = 0, 1, \dots, m$, can be calculated. The stochastic quasigradient (SQG) methods developed by Yu. M. Ermoliev [1–5] were designed for general optimization problems in which $f^i$ and $f^i_x$ cannot be calculated exactly.

Any deterministic optimization model, for example,

$$\min \{\, f^0(x, \omega) \mid f^i(x, \omega) \le 0, \ i = 1, \dots, m; \ x \in X \subseteq \mathbb{R}^n \,\},$$

contains parameters $\omega$, which in general may be random. This may be due both to incomplete information about the parameter values (measurement errors, statistical estimates of the parameters) and to the stochastic nature of the parameters themselves (forecasts, weather conditions, mineral deposits, variations in prices, demand, productivity, share prices, etc.). Then, for any fixed $x$, the constraints $f^i(x, \omega) \le 0$ may be violated for some realizations of the random parameter $\omega$, and the objective function $f^0(x, \omega)$ may take different values, so the question arises of what should be meant by a solution of such a problem. The set of functions $f(x, \omega) = \{ f^0(x, \omega), f^1(x, \omega), \dots, f^m(x, \omega) \}$, $\omega \in \Omega$, can be regarded as a vector characteristic of the solution $x \in X$, and the problem of choosing the optimal $x$ can be treated as a vector optimization problem with, generally speaking, an infinite number of criteria. Instead of $\omega$, its mean value $E\omega$ is sometimes substituted.
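To make the SQG idea concrete, the following is a minimal sketch of the basic projection scheme $x^{k+1} = \pi_X(x^k - \rho_k \xi^k)$, where $\xi^k$ is a random vector whose conditional expectation is (approximately) a subgradient of $F(x) = E f(x, \omega)$ at $x^k$. The example problem (a newsvendor-type cost), the demand distribution, and all numerical values are illustrative assumptions, not taken from the paper or from [1–5].

```python
import numpy as np

# Minimal sketch of a stochastic quasigradient (SQG) iteration
#     x_{k+1} = proj_X( x_k - rho_k * xi_k ),
# where xi_k is a one-sample quasigradient: its expectation over the
# random demand is a subgradient of F(x) = E f(x, omega).
# All problem data (costs, demand law, bounds) are illustrative assumptions.

rng = np.random.default_rng(0)

h, b = 1.0, 3.0            # holding and shortage costs (assumed)
x_lo, x_hi = 0.0, 10.0     # the "simple" feasible set X = [x_lo, x_hi]

def quasigradient(x, d):
    # One-sample subgradient of f(x, d) = h*max(x - d, 0) + b*max(d - x, 0)
    return h if x > d else -b

x = 5.0                    # arbitrary starting point in X
for k in range(200000):
    rho = 1.0 / (k + 1)    # steps satisfy: sum rho_k = inf, sum rho_k^2 < inf
    d = rng.exponential(scale=2.0)                            # one observation of omega
    x = min(max(x - rho * quasigradient(x, d), x_lo), x_hi)  # projection onto X

# The minimizer of E f(x, D) is the b/(b+h)-quantile of the demand D.
print(f"SQG estimate: {x:.3f}")
print(f"true optimum: {-2.0 * np.log(1.0 - b / (b + h)):.3f}")
```

Each iteration uses only a single random observation of $\omega$ rather than the exact expectation $F(x)$ or its (sub)gradient, which is precisely the setting in which SQG methods apply.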