Asymptotic Behavior of a Modified Stochastic Optimization Procedure in an Averaging Scheme
P. P. Gorun¹ and Y. M. Chabanyuk²
UDC 519.21
Abstract. The asymptotic behavior of a modified discrete stochastic optimization procedure (SOP) in a Markov environment in an averaging scheme is investigated. Additional SOP optimization parameters are introduced and used to investigate the behavior of fluctuations on increasing time intervals. The normalization-dependent form of the limit generator is established, and a modified discrete SOP is shown to be asymptotically normal when the introduced parameters assume certain values.

Keywords: Markov process, stochastic optimization, asymptotic behavior, averaging scheme.
The investigation of the fluctuation behavior of the classical stochastic optimization procedure (SOP) yields an estimate of the rate of its convergence to the extreme point of an averaged evolutionary system. This problem arises in applying the phase-averaging algorithm for random evolutions [1], which is based on the closeness of the initial and averaged evolutionary systems [2]. In particular, the fluctuation behavior of a diffusive evolutionary system with Markov jumps (a stochastic approximation procedure) is investigated in [3], where the rate function contains a singularly perturbed term with a small series parameter. The asymptotic behavior of a stochastic optimization procedure was investigated by the method of moments, described in more detail in [4, 5]; for more general cases, other limit distributions are obtained in [6–8].

In [9], some randomized stochastic optimization algorithms are considered under almost arbitrary disturbances, with trial perturbations applied at the input. Two of them are randomized versions of the Kiefer–Wolfowitz procedure (each requires two evaluations of an unknown function at every iteration step). That paper also includes a brief review of the works of H. Kushner and D. Clark, B. T. Polyak and A. B. Tsybakov, J. Spall, H. F. Chen, and others; these authors studied such procedures, which complement one another, under different conditions.

Since the asymptotic analysis of the classical SOP was carried out by the authors of this article in [10], the present article performs the asymptotic analysis of the modified procedure described in [9, procedure (2)], taking the above into account and with a view to comparing the asymptotic behavior of the classical and modified procedures. For simplicity, the one-dimensional case of regression functions is considered, but the results obtained extend by analogy to the multidimensional case.
The pseudo-gradient and also the SOP for a regression function C(u; x) ∈ C³(ℝ), x ∈ X, are considered in the following modified representation:

    ∇_b C(u; x) = [C(u + b; x) − C(u; x)] / b,   u = u(t), b = b(t).   (1)

The jump SOP in a scheme of series in a Markov environment is defined by the relationship [11] (let Σ_{n=0}^{−1} a_n^ε ∇_b C(u_n^ε; x_n^ε) = 0).
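To make the role of the pseudo-gradient (1) concrete, the sketch below runs a Kiefer–Wolfowitz-type iteration u_{n+1} = u_n − a_n ∇_b C(u_n; x_n) on a noisy quadratic. This is only an illustration under stated assumptions, not the paper's exact procedure: the step sequences a_n and b_n, the test function C, and the i.i.d. Gaussian stand-in for the Markov environment x_n are all chosen here for the example.

```python
import random

def finite_diff_pseudo_gradient(C, u, b, x):
    """One-sided pseudo-gradient from Eq. (1): (C(u+b; x) - C(u; x)) / b."""
    return (C(u + b, x) - C(u, x)) / b

def sop(C, u0, n_steps,
        a=lambda n: 1.0 / (n + 1),          # decreasing step sequence a_n (assumed)
        b=lambda n: 1.0 / (n + 1) ** 0.25,  # slowly vanishing perturbation b_n (assumed)
        seed=0):
    """Minimal descent iteration u_{n+1} = u_n - a_n * grad_b C(u_n; x_n).
    The environment x_n is drawn i.i.d. here as a stand-in for a Markov chain."""
    rng = random.Random(seed)
    u = u0
    for n in range(n_steps):
        x = rng.gauss(0.0, 1.0)  # environment state
        u = u - a(n) * finite_diff_pseudo_gradient(C, u, b(n), x)
    return u

# Hypothetical test function: quadratic with minimum at u* = 2,
# multiplicative noise so the pseudo-gradient is actually random.
C = lambda u, x: (u - 2.0) ** 2 + 0.1 * x * u

u_star = sop(C, u0=0.0, n_steps=5000)
```

With these choices the iterate settles near the minimizer u* = 2, with a small bias of order b_n/2 coming from the one-sided difference in (1); this is why b(t) must vanish as t grows.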
¹National University “Lviv Polytechnic,” Lviv, Ukraine, Pa