

ORIGINAL ARTICLE

Two swarm intelligence approaches for tuning extreme learning machine

Abobakr Khalil Alshamiri1 · Alok Singh1 · Bapi Raju Surampudi1,2

Received: 15 April 2016 / Accepted: 18 January 2017 © Springer-Verlag Berlin Heidelberg 2017

Abstract  Extreme learning machine (ELM) is a new algorithm for training single-hidden layer feedforward neural networks which provides good performance as well as fast learning speed. ELM tends to produce good generalization performance with a large number of hidden neurons, as the input weights and hidden neuron biases are randomly initialized and remain unchanged during the learning process, while the output weights are determined analytically. In this paper, two swarm intelligence based metaheuristic techniques, viz. Artificial Bee Colony (ABC) and Invasive Weed Optimization (IWO), are proposed for tuning the input weights and hidden biases. The proposed approaches, called ABC-ELM and IWO-ELM, select the input weights and hidden biases using ABC and IWO, respectively, and compute the output weights using the Moore-Penrose (MP) generalized inverse. The proposed approaches are tested on different benchmark classification data sets, and simulations show that they obtain good generalization performance in comparison to other techniques available in the literature.

* Alok Singh
  [email protected]

  Abobakr Khalil Alshamiri
  [email protected]

  Bapi Raju Surampudi
  [email protected]; [email protected]

1 School of Computer and Information Sciences, University of Hyderabad, Hyderabad 500 046, India

2 Cognitive Science Lab, International Institute of Information Technology, Hyderabad 500 032, India



Keywords  Artificial bee colony algorithm · Classification · Extreme learning machine · Invasive weed optimization · Swarm intelligence

1 Introduction

Extreme learning machine (ELM) is a relatively new learning algorithm, proposed by Huang et al. [1, 2], for training single-hidden layer feedforward neural networks (SLFNs). Several variants and improvements of the original ELM have been proposed and applied in various domains and fields [3–6]. In ELM, the input weights and hidden layer biases are randomly generated, and the output weights are computed using the Moore-Penrose (MP) generalized inverse [7]. Traditional gradient-descent based algorithms for SLFNs, such as back-propagation (BP), require all the weights and biases of the network to be tuned iteratively. These algorithms usually get stuck in local minima and suffer from slow convergence. In contrast, ELM provides good generalization performance and converges very fast. In ELM theory, since the input weights and hidden biases are randomly assigned and remain fixed during the learning process, the number of hidden neurons should be large enough to obtain good generalization performance [8]. The suitable number of neurons in the ELM hidden layer is still an open problem, and several methods have been proposed to determine it. Rong
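To make the ELM training procedure described above concrete, the following is a minimal NumPy sketch of ELM training and prediction, assuming a sigmoid activation and a toy binary classification task; the function names (elm_train, elm_predict) and all parameter choices are illustrative, not taken from the paper. In the proposed ABC-ELM and IWO-ELM, the random draw of W and b would be replaced by candidate solutions produced by ABC or IWO, each candidate being scored by training and evaluating the network in exactly this way.

```python
# A minimal ELM sketch, assuming a sigmoid activation and synthetic data.
import numpy as np

def elm_train(X, T, n_hidden, rng):
    """Train an SLFN the ELM way: random hidden layer, analytic output layer."""
    n_features = X.shape[1]
    # Input weights W and hidden biases b are drawn at random and never tuned.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer output matrix
    # Output weights: minimum-norm least-squares solution beta = H^+ T,
    # where H^+ is the Moore-Penrose generalized inverse of H.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                              # 200 samples, 5 features
T = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy binary targets
W, b, beta = elm_train(X, T, n_hidden=40, rng=rng)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == (T > 0.5))
print(f"training accuracy: {acc:.3f}")
```

Note that no iterative weight update occurs: the only linear-algebra step is the pseudoinverse, which is why ELM trains orders of magnitude faster than back-propagation, at the cost of needing enough hidden neurons for the random features to be expressive.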