Region Stability and Stabilization of Recurrent Neural Network with Parameter Disturbances
Gang Bao1 · Yue Peng1 · Xue Zhou1 · Shunqi Gong1

Accepted: 31 August 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
This paper focuses on global region stability and stabilization analysis for recurrent neural networks with certain or uncertain parameter disturbances. First, it presents global region stability results for recurrent neural networks with certain parameter disturbances by means of state partition and mathematical analysis. Next, it designs an adaptive controller that stabilizes the network states to the desired region for recurrent neural networks with uncertain parameter disturbances. Finally, it gives two numerical examples to verify the obtained results.

Keywords Region stability · Stabilization · Recurrent neural networks · Adaptive control
1 Introduction

Global stability of recurrent neural networks (RNNs) has been a research focus for several decades because network stability is the foundation for applications of RNNs. To date, many results have been reported. For example, Zeng et al. [1] derived global stability criteria for a general class of discrete-time RNNs by the induction principle and proof by contradiction. Qin and Xue [2] derived global finite-time exponential stability and convergence conditions for RNNs with discontinuous activation functions. Cao et al. [3] introduced multiple and discrete delays into the RNN model and discussed global asymptotic stability of RNNs. Zhang and Wang [4] gave a global asymptotic stability criterion for delayed cellular neural networks by linear matrix inequalities and the interior-point algorithm. Considering unbounded time-varying delays, Zeng et al. [5] derived sufficient conditions for global asymptotic stability and exponential stability of RNNs. Wang et al. [6] unified several kinds of time delays in one model and presented global asymptotic stability criteria for such neural networks. Forti et al. [7] obtained sufficient conditions for global exponential stability and global finite-time convergence of RNNs with constant delays and discontinuous activation functions by differential inclusion theory. Ge et al. [8] proposed the delay-decomposition method and
Gang Bao
[email protected]

1 Hubei Key Laboratory of Cascaded Hydropower Stations Operation and Control, China Three Gorges University, Yichang 443002, China
derived delay-dependent stability conditions for RNNs with time-varying delay. Chen and Zheng [9] obtained global asymptotic stability criteria for RNNs with distributed delays by M-matrix theory. Without constructing Lyapunov functions, Zeng et al. [10] obtained global exponential stability criteria for RNNs by the comparison principle and estimated the exponential convergence rates from the obtained results. Hu et al. [11] gave global exponential stability criteria for delayed RNNs with impulsive effects by homeomorphism theory. Senan [12] introduced the Takagi–Sugeno