A LSTM based prediction model for nonlinear dynamical systems with chaotic itinerancy

Yanwen Xue¹ · Jun Jiang¹ · Ling Hong¹
¹ State Key Laboratory for Strength and Vibration, Xi'an Jiaotong University, Xi'an 710049, China
Corresponding author: Jun Jiang
Received: 2 July 2020 / Revised: 21 July 2020 / Accepted: 23 July 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract
Predicting a chaotic trajectory from measured time-history data, without prior knowledge of the underlying dynamical model, is a challenging task in data-driven analysis because of the sensitivity to initial conditions. In this paper, a Long Short-Term Memory (LSTM) network with a merge layer is proposed to predict the future states of a coupled Morris–Lecar (M-L) system exhibiting chaotic itinerancy. Two LSTM models, a single-branch and a multi-branch one, are constructed to carry out predictions under multivariate loading conditions. Compared with the single-branch network, the multi-branch model with the added merge layer makes better use of its weights, which greatly reduces the training cost and yields a lower prediction error, making the multi-layer LSTM promising for estimating high-dimensional complex dynamical behavior such as transient chaotic itinerancy.

Keywords  Nonlinear dynamical systems · Chaotic itinerancy · Time series prediction · Multivariate loading conditions
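As a concrete illustration of the two architectures compared in the abstract, the sketch below builds a multi-branch LSTM whose branch outputs are joined by a merge (concatenate) layer, using the Keras API. It is a minimal sketch, not the authors' exact model: the number of branches, time steps, layer sizes, and the random training data are illustrative assumptions.

# Minimal sketch of a multi-branch LSTM with a merge (concatenate) layer.
# All dimensions are hypothetical; the paper's actual hyperparameters are
# not specified in this section.
import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense, Concatenate
from tensorflow.keras.models import Model

n_samples, n_steps, n_branches, units = 200, 50, 2, 32  # assumed values

# One input/LSTM branch per measured variable (e.g. each coupled M-L unit).
inputs, branches = [], []
for _ in range(n_branches):
    inp = Input(shape=(n_steps, 1))
    inputs.append(inp)
    branches.append(LSTM(units)(inp))

merged = Concatenate()(branches)      # the merge layer joining the branches
output = Dense(n_branches)(merged)    # next-step prediction for each variable

model = Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse")

# Illustrative training call on random data shaped like sliding windows.
x = [np.random.rand(n_samples, n_steps, 1) for _ in range(n_branches)]
y = np.random.rand(n_samples, n_branches)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)

A single-branch counterpart could, for example, stack all measured variables into one Input of shape (n_steps, n_branches) feeding a single LSTM.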

1 Introduction

The complicated dynamical behavior in which a system repeatedly and sequentially visits different quasi-stable attractors of lower dimension along a chaotic trajectory in a higher-dimensional space is called "chaotic itinerancy" [1]. Building a prediction model from the data of this kind of system is particularly difficult because of its high-dimensional, complex nonlinear characteristics. The development of neural networks, however, offers a new way of modeling that greatly simplifies the task and reduces the sensitivity to nonlinear characteristics, with no need for complicated theoretical analysis.

A traditional neural network has no ability for "continuous thinking": there is no structure in the network that can establish a connection between consecutive input data, so information cannot be transferred between the inputs. The Recurrent Neural Network (RNN) adds a fully connected layer, called the "hidden layer", to the ordinary neural unit and unfolds the temporal relevance of the recurrent unit's inputs into horizontal connections between the hidden-layer units [2]. In this way, the transfer of information between input data is realized and the unit gains memory capacity. In short, an RNN replicates a single neural network structure side by side to form a connected network layer [3]. After the weight parameters between input and output are learned through this connection layer, the state of the neural network unit can be obtained, and the information transfer between in
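As a standard textbook formulation (not quoted from the paper) of the horizontal connections described above, the hidden state of a simple RNN can be written as

$$h_t = \tanh\left(W_{xh} x_t + W_{hh} h_{t-1} + b_h\right), \qquad y_t = W_{hy} h_t + b_y,$$

where $x_t$ is the input at time step $t$, $h_t$ is the hidden state passed to the next step, $y_t$ is the output, and the recurrent weight matrix $W_{hh}$ realizes the connection between hidden-layer units of consecutive time steps, giving the unit its memory capacity.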