Improved zeroing neural networks for finite time solving nonlinear equations




EMERGING TRENDS OF APPLIED NEURAL COMPUTATION - E_TRAINCO

Jie Jin1 · Lv Zhao1 · Mu Li1 · Fei Yu2 · Zaifang Xi1

Received: 20 July 2019 / Accepted: 22 November 2019
© Springer-Verlag London Ltd., part of Springer Nature 2019

Abstract
Nonlinear equations are an important cornerstone of nonlinear science, and many practical problems in scientific and engineering fields can be described mathematically by nonlinear equations. In this paper, improved zeroing neural network (IZNN) models are presented and investigated for finding the solutions of the time-invariant nonlinear equation (TINE) and the time-varying nonlinear equation (TVNE) in predictable, finite time. Compared with the exponentially convergent zeroing neural network (ZNN), the convergence time of the IZNN models is finite and can be estimated; in addition, the IZNN model is more stable and reliable for solving high-order TVNEs. Both theoretical and numerical simulation results of the ZNN and IZNN for finding the solutions of the TINE and TVNE are presented to demonstrate the superiority and effectiveness of the IZNN model.

Keywords: Finite-time convergence · Zeroing neural network (ZNN) · Time-invariant nonlinear equation (TINE) · Time-varying nonlinear equation (TVNE) · Improved zeroing neural network (IZNN)

Jie Jin, [email protected]

1 School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan 411201, China

2 College of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China

1 Introduction

The innate character of most phenomena in the natural world is nonlinear, and nonlinear science has become not only the core of modern science but also the frontier of modern scientific research. Since most nonlinear phenomena are described by nonlinear equations, solving such equations plays a crucial role in understanding the real world, and it has aroused great interest among scientists and engineers [1–6]. In the past decades, iterative methods have commonly been used for solving nonlinear equations, and the Newton iteration is one of the most effective: it converges to the theoretical roots of a nonlinear equation quadratically [7–11]. To improve the convergence speed of the Newton iteration for solving nonlinear equations, many improved Newton-like iterations have been reported [12–17].

In recent years, neural networks have developed rapidly, and various novel neural dynamics have been presented. Compared with the existing numerical Newton or Newton-like iterations for solving nonlinear equations, neural dynamics have the intrinsic advantages of parallel processing, self-adaptation and easier hardware implementation, and the neural dynamic method has become one of the most effective computational tools for solving nonlinear equations. As an important kind of recurrent neural network, the gradient-based
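As a point of comparison for the classical approach discussed above, the following is a minimal sketch of the Newton iteration for a scalar time-invariant nonlinear equation f(x) = 0; the example equation x³ − 2 = 0 and all names here are illustrative, not taken from the paper.

```python
# Newton iteration for a scalar nonlinear equation f(x) = 0.
# Near a simple root the iteration converges quadratically,
# which is the property referenced in the introduction.

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)  # Newton update step
    return x

# Illustrative equation: x**3 - 2 = 0, root at 2**(1/3).
root = newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.0)
```

Note that, unlike the ZNN models studied in this paper, such an iteration handles only time-invariant equations and offers no prescribed bound on when convergence occurs.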