Conditioning optimization of extreme learning machine by multitask beetle antennae swarm algorithm



REGULAR RESEARCH PAPER

Xixian Zhang¹ · Zhijing Yang¹ · Faxian Cao¹ · Jiangzhong Cao¹ · Meilin Wang¹ · Nian Cai¹

Received: 26 January 2019 / Accepted: 30 April 2020 / Published online: 16 May 2020

© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract

The extreme learning machine (ELM), a simple and fast neural network, has shown good performance in various areas. Unlike a general single hidden layer feedforward neural network (SLFN), the input weights and hidden-layer biases of ELM are generated randomly, so training the model takes only a little computational overhead. However, selecting the input weights and biases at random may result in ill-conditioned problems. Aiming to optimize the conditioning of ELM, we propose an effective particle swarm heuristic algorithm called the Multitask Beetle Antennae Swarm (MBAS) algorithm, which is inspired by the structures of the artificial bee colony (ABC) algorithm and the Beetle Antennae Search (BAS) algorithm. The proposed MBAS is then applied to optimize the input weights and biases of ELM so as to resolve its ill-conditioning. Experimental results show that the proposed method is capable of simultaneously reducing the condition number and the regression error, and achieves good generalization performance.

Keywords: Extreme learning machine (ELM) · Conditioning optimization · Beetle antennae search (BAS) · Heuristic algorithm
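To make the conditioning issue concrete, the following minimal sketch trains a basic ELM and reports the condition number of the hidden-layer output matrix H. This is an illustrative implementation under common assumptions (tanh activation, Gaussian random weights, Moore-Penrose solution for the output weights), not the paper's MBAS-optimized method; all function and variable names are ours.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=None):
    """Train a basic ELM: draw input weights and biases at random, then
    solve the output weights in closed form via the pseudoinverse."""
    rng = np.random.default_rng(0) if rng is None else rng
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights (least squares)
    # A large condition number of H is the ill-conditioning that motivates
    # optimizing W and b rather than leaving them purely random.
    return W, b, beta, np.linalg.cond(H)

def elm_predict(X, W, b, beta):
    """Forward pass through the trained single-hidden-layer network."""
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is fitted (a single linear least-squares solve), training cost is far lower than gradient-based SLFN training, which is the speed advantage the introduction describes.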

1 Introduction

The extreme learning machine (ELM), proposed by Huang et al. [1], is a feasible single hidden layer feedforward network (SLFN). It is composed of three core components: an input layer, a hidden layer, and an output layer, and it has been successfully applied to many research and engineering problems. As a flexible and fast SLFN, ELM assigns the input weights and hidden-layer biases randomly, which greatly increases training speed and allows huge amounts of data to be processed in a short time. There is no doubt that ELM is a good choice for tasks with real-time requirements. Wong et al. used ELM to detect real-time fault signals of a gas turbine generator system [2]. Xu et al. [3, 4] proposed an ELM-based predictor for immediate assessment of electrical power systems. Meanwhile, the performance of ELM and its variants has been shown to be superior to most classical machine learning methods in the fields of image

* Zhijing Yang, [email protected]

1 School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China
processing [5–7], speech recognition [8–10], biomedical sciences [11–13], and so on. Plenty of effort has been devoted to enhancing the accuracy of ELM by adjusting the neural network architecture and changing the number of hidden nodes according to certain rules. An incremental constructive ELM, which adds hidden nodes on the basis of convex optimization methods and neural network theory, was proposed by Huang et al. [14]. While Rong e