Event-based optimization approach for solving stochastic decision problems with probabilistic constraint
Xiaonong Lu 1,2 · Zhanglin Peng 1,2 · Qiang Zhang 1,2 · Shanlin Yang 1,2

Received: 21 December 2017 / Accepted: 6 February 2019
© Springer-Verlag GmbH Germany, part of Springer Nature 2019
Abstract In many practical control or management systems, the manager may tolerate system errors or parameter deviations occurring with a probability within a certain range. Decision optimization under probabilistic constraints is therefore an issue that needs to be addressed urgently. In this paper, we develop an event-based approach for solving probabilistically constrained decision problems in discrete event dynamic systems. We first introduce the framework of event-based optimization, and then, using the methodology of performance sensitivity analysis, we present an online event-based policy iteration algorithm based on the derived performance gradient formula. Applying the event-based idea, we propose the concepts of "risk state", "risk event", and "risk index", which better describe the nature of the probabilistically constrained problem. Furthermore, by taking a Lagrangian approach, the constrained decision problem can be solved in two steps. Finally, numerical experiments are designed to verify the efficiency of the proposed method.

Keywords Event-based optimization · Probabilistic constraint · Performance gradient · Policy iteration · Risk · Lagrangian method
Zhanglin Peng (corresponding author): [email protected]
Xiaonong Lu: [email protected]
Qiang Zhang: [email protected]
Shanlin Yang: [email protected]

1 School of Management, Hefei University of Technology, Hefei 230009, China
2 Key Laboratory of Process Optimization and Intelligent Decision-Making, Ministry of Education, Hefei 230009, China
1 Introduction

In this paper, we consider the stochastic sequential decision problem with a probabilistic constraint in discrete event dynamic systems (DEDS). Methods based on the Markov decision process (MDP) for solving such decision problems have been studied in depth in recent years, and the algorithms used to compute the optimal control policy have been continuously developed and refined. However, in many practical systems the state space of the MDP is very large, because the state space usually grows exponentially while the size of the system state grows only linearly. The standard stochastic dynamic programming algorithms based on value iteration and policy iteration [1] therefore often have high computational complexity. To address the shortcomings of traditional policy optimization methods, many approximation methods for reducing the complexity of these algorithms have been proposed, such as state clustering [2,3], time clustering [4,5], approximate dynamic programming [6], and reinforcement learning [7]. Many applications of these methods have also been widely studied recently [8–10].
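As a point of reference for the standard approach discussed above, the following is a minimal sketch of policy iteration on a hypothetical three-state, two-action MDP. All transition probabilities, rewards, and the discount factor are illustrative values, not taken from this paper; the sketch only shows why each iteration is costly (an exact linear solve over the full state space), which is the complexity the approximation methods above aim to reduce.

```python
import numpy as np

# Hypothetical toy MDP (values are illustrative only):
# P[a, s, s'] = transition probability, R[a, s] = expected immediate reward.
n_states, n_actions, gamma = 3, 2, 0.9
P = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.3, 0.5]],   # action 0
    [[0.0, 0.9, 0.1], [0.5, 0.4, 0.1], [0.3, 0.3, 0.4]],   # action 1
])
R = np.array([[1.0, 0.0, 2.0],    # action 0
              [0.5, 1.5, 0.0]])   # action 1

def policy_iteration(P, R, gamma):
    """Classic policy iteration: exact evaluation + greedy improvement."""
    n_states = R.shape[1]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        # This linear solve over the whole state space is the expensive step.
        P_pi = P[policy, np.arange(n_states)]
        r_pi = R[policy, np.arange(n_states)]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to Q(s, a).
        Q = R + gamma * (P @ v)
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

policy, v = policy_iteration(P, R, gamma)
```

For a finite MDP the loop terminates at a policy that is greedy with respect to its own value function. The cost of each iteration is dominated by the O(|S|^3) linear solve, which is exactly what becomes prohibitive when the state space grows exponentially.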