

ORIGINAL ARTICLE

On optimization based extreme learning machine in primal for regression and classification by functional iterative method

S. Balasundaram · Deepak Gupta

Received: 17 November 2013 / Accepted: 26 June 2014
© Springer-Verlag Berlin Heidelberg 2014

Abstract In this paper, the optimization-based extreme learning machine recently proposed by Huang et al. (Neurocomputing, 74: 155–163, 2010) is considered in its primal form, whose solution is obtained by solving an absolute value equation problem with a simple functional iterative algorithm. It is proved that, under sufficient conditions, the algorithm converges linearly. Pseudo codes of the algorithm for regression and classification are given, and they can be easily implemented in MATLAB. Experiments were performed on a number of real-world datasets using additive and radial basis function hidden nodes. Similar or better generalization performance of the proposed method in comparison with the support vector machine (SVM), extreme learning machine (ELM), optimally pruned extreme learning machine (OP-ELM) and optimization based extreme learning machine (OB-ELM) methods, together with faster learning than SVM and OB-ELM, demonstrates its effectiveness and usefulness.

Keywords Extreme learning machine · Single hidden layer feedforward neural networks · Functional iterative method · Support vector machine
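For readers unfamiliar with the baseline, the standard ELM training step that this work builds on (a randomly assigned hidden layer, followed by smallest-norm output weights obtained from a linear system) can be sketched roughly as follows. This is an illustrative Python/NumPy reconstruction of ordinary ELM with sigmoid additive hidden nodes, not the authors' MATLAB pseudo code for the functional iterative method; the function names are hypothetical.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, rng=None):
    """Fit a basic ELM: random hidden layer, least-norm output weights."""
    rng = np.random.default_rng(rng)
    # Input weights and hidden biases are chosen randomly and never tuned.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden layer output matrix H (sigmoid additive nodes).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Smallest-norm solution of H @ beta = y via the Moore-Penrose
    # pseudoinverse, as in the original ELM formulation.
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

The optimization-based variant studied in this paper replaces the pseudoinverse step with a regularized formulation solved in the primal by functional iteration.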

S. Balasundaram (corresponding author) · D. Gupta
School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi 110067, India
e-mail: [email protected]; [email protected]

D. Gupta
e-mail: [email protected]

1 Introduction

Recently, a new learning algorithm for the single hidden layer feedforward neural network (SLFN) architecture, called the extreme learning machine (ELM) method, was proposed in [21] to overcome many of the problems of traditional feedforward neural network learning algorithms, such as the presence of local minima, improper learning rates, overfitting and slow convergence. Once the input weights and hidden layer biases have been chosen randomly, ELM determines the unknown output weight vector of the network having the smallest norm by solving a system of linear equations. ELM is a simple unified algorithm for regression, binary and multiclass problems, and it has been successfully tested on benchmark problems of practical importance. It was initially proposed for SLFNs and later extended to ''generalized'' SLFNs, which need not be neuron alike [15, 16]. The essence of ELM is that there is no need to tune the hidden layer of SLFNs. The growing popularity of ELM [3, 4, 10, 14, 22, 27, 28, 33, 34, 38, 39, 42] is due to its better generalization performance with much faster learning speed in comparison to traditional computational intelligence techniques [19].

The main problem with ELM is the stochastic nature of the hidden layer output matrix, which in practice may lower its learning accuracy [6]. Further, it was observed that, to achieve an acceptable level of performance, a large number of hidden nodes might be