Adaptive online sequential extreme learning machine for dynamic modeling



METHODOLOGIES AND APPLICATION

Jie Zhang1 · Yanjiao Li1 · Wendong Xiao1

© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract

Extreme learning machine (ELM) is an emerging machine learning algorithm for training single-hidden-layer feedforward networks (SLFNs). The salient features of ELM are that its hidden layer parameters can be generated randomly, and only the corresponding output weights are determined analytically in the least-square manner, so it is easier to implement and offers faster learning speed and better generalization performance. As the online version of ELM, online sequential ELM (OS-ELM) can deal with sequentially arriving data one by one or chunk by chunk, with fixed or varying chunk size. However, OS-ELM cannot function well in dynamic modeling problems due to the data saturation problem. In order to tackle this issue, in this paper we propose a novel OS-ELM, named adaptive OS-ELM (AOS-ELM), for enhancing the generalization performance and dynamic tracking capability of OS-ELM for modeling problems in nonstationary environments. The proposed AOS-ELM can efficiently reduce the negative effects of the data saturation problem: approximate linear dependence (ALD) is adopted to filter out useless new data, and a modified hybrid forgetting mechanism (HFM) alleviates the impact of outdated data. The performance of AOS-ELM is verified on selected benchmark datasets and a real-world application, i.e., device-free localization (DFL), by comparing it with classic ELM, OS-ELM, FOS-ELM, and DU-OS-ELM. Experimental results demonstrate that AOS-ELM achieves better performance.

Keywords Extreme learning machine · Online sequential extreme learning machine · Data saturation problem · Approximate linear dependence · Hybrid forgetting mechanism

Communicated by V. Loia.

Corresponding author: Wendong Xiao [email protected]

Jie Zhang [email protected]

Yanjiao Li [email protected]

1 School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China

1 Introduction

In the past decades, single-hidden-layer feedforward neural networks (SLFNs) have received much attention and have been applied in many fields, because they can theoretically approximate any target function and form decision boundaries of arbitrary shape (Huang et al. 2000; Hornik 1991). However, most of the traditional gradient-based training approaches for SLFNs are time-consuming due to their iterative mechanism and tendency to become trapped in local minima. In order to address the aforementioned issues, Huang et al. (2006) proposed extreme learning machine (ELM) as an extension of the traditional SLFN training approaches. Its salient features are that the hidden layer parameters can be generated randomly without iterative tuning, and the output weights can be determined analytically in the least-square manner. Accordingly, ELM is easier to implement with fast