Some Stochastic Gradient Algorithms for Hammerstein Systems with Piecewise Linearity
Yan Pu1,2 · Yongqing Yang1 · Jing Chen1

Received: 4 March 2020 / Revised: 15 September 2020 / Accepted: 18 September 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract Some stochastic gradient (SG) algorithms for Hammerstein systems with piecewise linearity are developed in this paper. Because of the complexity of the nonlinear structure, the key term separation technique is used to transform the nonlinear model into a regression model, and several SG algorithms are then proposed for this model. Since the basic SG algorithm has a slow convergence rate, a forgetting factor SG algorithm and an Aitken SG algorithm are provided. Compared with the forgetting factor SG algorithm, the Aitken SG algorithm yields a smaller estimation-error variance, which means that the Aitken SG algorithm is more effective. Two simulation examples are provided to show the effectiveness of the proposed algorithms.

Keywords Piecewise linearity · Parameter estimation · Aitken method · Gradient search · Forgetting factor
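As a rough illustration of the ideas in the abstract, the sketch below simulates a first-order Hammerstein system with a hypothetical two-segment piecewise-linear input nonlinearity, applies key term separation to obtain a linear regression form, and runs a basic SG update. The model order, parameter values, and noise level are assumptions for demonstration, not the paper's exact simulation examples.

```python
import numpy as np

# Hypothetical two-segment piecewise-linear nonlinearity (illustrative,
# not the paper's exact model): f(u) = a1*u for u >= 0, a2*u for u < 0.
def f(u, a1, a2):
    return a1 * u if u >= 0 else a2 * u

rng = np.random.default_rng(0)
a1_true, a2_true, b_true = 1.0, 0.5, 0.8   # assumed true parameters

# Simulate y(t) = b * f(u(t-1)) + v(t), a first-order Hammerstein model
# chosen purely to keep the regression two-dimensional.
N = 5000
u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = b_true * f(u[t - 1], a1_true, a2_true) + 0.05 * rng.standard_normal()

# Key term separation: the product b*f(u) becomes the linear regression
# y(t) = phi(t)^T theta + v(t) with
# phi(t) = [u(t-1)*1{u>=0}, u(t-1)*1{u<0}] and theta = [b*a1, b*a2].
theta = np.zeros(2)
r = 1.0
for t in range(1, N):
    phi = np.array([u[t - 1] if u[t - 1] >= 0 else 0.0,
                    u[t - 1] if u[t - 1] < 0 else 0.0])
    r += phi @ phi                              # accumulated squared regressor norm
    theta += phi / r * (y[t] - phi @ theta)     # SG update: no matrix inversion

print(theta)   # should approach [b*a1, b*a2] = [0.8, 0.4]
```

Note that only the products b*a1 and b*a2 are identifiable here; a normalization such as fixing one gain is needed to recover the individual parameters, which is the usual convention for Hammerstein models.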
Yongqing Yang (corresponding author)
[email protected]

Yan Pu
[email protected]

Jing Chen
[email protected]

1 School of Science, Jiangnan University, Wuxi 214122, People's Republic of China
2 School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, People's Republic of China

Circuits, Systems, and Signal Processing

1 Introduction

Parameter estimation plays an important role in controller design [13,14], because the controller design of a dynamic system is usually established on the premise that the parameters of the system are known [19,28]. Compared with linear systems, nonlinear systems are more common in engineering practice [6,10], and they can be roughly divided into four categories: Hammerstein systems [36], Wiener systems [18], Hammerstein–Wiener systems and Wiener–Hammerstein systems [2,35]. Recently, many identification algorithms have been developed for these nonlinear systems, such as the stochastic gradient (SG) algorithms [39], the expectation maximization algorithms and the iterative algorithms [4]. The SG algorithm updates the parameter estimates using only the latest input–output data at each sampling instant, and it does not need to compute an inverse matrix; thus, it has a low computational load. Its variants include the multi-innovation stochastic gradient algorithms [16,42] and the gradient-based iterative algorithms [12]. The idea of the gradient-based identification algorithms is first to determine the search direction and then to calculate the step size at each sampling instant. Although the computational effort of the SG algorithm is small, its convergence rate is slow because of its zigzag search directions. In general, there are two methods to improve the convergence rate. One is to obtain the optimal direction at each sampling instant; for example, for control problems with undetermined final time, Hussu provided a conjugate-gradient method [20]. The other method is to compute
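To make the convergence discussion above concrete, the following minimal sketch contrasts the basic SG update with the forgetting-factor variant mentioned in the abstract. The scalar regression model, the forgetting factor value, and the mid-run parameter jump are all assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3000
lam = 0.99                      # assumed forgetting factor, 0 < lam <= 1

theta_sg, r_sg = 0.0, 1.0       # standard SG state
theta_ff, r_ff = 0.0, 1.0       # forgetting-factor SG state
for t in range(N):
    theta_true = 2.0 if t < N // 2 else 1.0   # parameter jumps mid-run
    phi = rng.uniform(-1.0, 1.0)
    y = theta_true * phi + 0.05 * rng.standard_normal()

    # Standard SG: r_sg accumulates forever, so the gain phi/r_sg decays
    # like 1/t and the estimate reacts ever more slowly to new data.
    r_sg += phi * phi
    theta_sg += phi / r_sg * (y - phi * theta_sg)

    # Forgetting-factor SG: old data are discounted, so the gain stays
    # bounded away from zero and the estimate can track the jump.
    r_ff = lam * r_ff + phi * phi
    theta_ff += phi / r_ff * (y - phi * theta_ff)

print(theta_sg, theta_ff)   # theta_ff ends near 1.0; theta_sg lags behind
```

The same vanishing-gain behavior is what the zigzag-search remark describes: the basic SG step size shrinks over time, so any scheme that keeps the gain larger (forgetting factor) or extrapolates the iterate sequence (Aitken acceleration) speeds up convergence.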