Negative correlation learning in the extreme learning machine framework
ORIGINAL ARTICLE
Carlos Perales-González1 • Mariano Carbonero-Ruz1 • Javier Pérez-Rodríguez1 • David Becerra-Alonso1 • Francisco Fernández-Navarro1
Received: 9 October 2019 / Accepted: 17 February 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
Abstract
The extreme learning machine (ELM) has proven to be a suitable algorithm for classification problems. Several ensemble meta-algorithms have been developed to improve the generalization of ELM models. The ensemble approaches introduced in the ELM literature mainly come from the boosting and bagging frameworks. The generalization of these methods relies on data sampling procedures, under the assumption that the training data are heterogeneous enough to set up diverse base learners. The proposed ELM ensemble model overcomes this strong assumption by using the negative correlation learning (NCL) framework. An alternative diversity metric based on the orthogonality of the outputs is proposed. The formulation of the error function allows us to derive an analytical solution for the parameters of the ELM base learners, which significantly reduces the computational burden of the standard NCL ensemble method. The proposed ensemble method has been validated in an experimental study on a variety of benchmark datasets, comparing it with the existing ensemble methods in ELM. The proposed method statistically outperforms the comparison ensemble methods in accuracy, while also reporting a competitive computational burden (especially when compared to the baseline NCL-inspired method).

Keywords Negative correlation learning • Extreme learning machine • Ensemble • Diversity
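To give a concrete flavor of the approach the abstract describes, the sketch below trains ELM base learners sequentially, where each learner solves a ridge regression augmented with a penalty that pushes its output matrix toward orthogonality with the outputs of earlier learners. This is a minimal illustrative reading, not the paper's exact formulation; all names (`fit_ncl_elm_ensemble`, `lam`, `C`) and the specific penalty are our assumptions.

```python
import numpy as np

def elm_hidden(X, W, b):
    """Random-feature hidden layer: sigmoid(X @ W + b)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def fit_ncl_elm_ensemble(X, Y, n_learners=5, n_hidden=50, C=1.0, lam=0.1, seed=0):
    """Hypothetical NCL-style ELM ensemble. Each learner minimizes
        ||H B - Y||^2 + ||B||^2 / C + lam * sum_k ||(H B)^T F_k||_F^2,
    where F_k are the outputs of previously trained learners; setting the
    gradient to zero gives the closed form used below."""
    rng = np.random.default_rng(seed)
    learners, prev_outputs = [], []
    for _ in range(n_learners):
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = elm_hidden(X, W, b)                      # n x n_hidden
        A = H.T @ H + np.eye(n_hidden) / C           # ridge term
        for F in prev_outputs:                       # diversity penalty
            G = H.T @ F                              # n_hidden x n_classes
            A += lam * (G @ G.T)
        B = np.linalg.solve(A, H.T @ Y)              # analytical solution
        learners.append((W, b, B))
        prev_outputs.append(H @ B)
    return learners

def predict(learners, X):
    """Average the base learners' outputs."""
    return np.mean([elm_hidden(X, W, b) @ B for W, b, B in learners], axis=0)
```

Note that under this formulation each base learner still reduces to a single linear system of size n_hidden × n_hidden, which is what lets an analytical NCL-style ELM avoid the gradient-based training loop of classical NCL.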
1 Introduction

Over the years, the extreme learning machine (ELM) [30] has become a competitive algorithm for both multi-class classification and regression problems. It has been used extensively not only for traditional supervised machine learning problems, but also for time series prediction [57, 69], image classification [10] and speech recognition [67]. Both the single-hidden-layer feedforward network (SLFN) [31] and the kernel trick version [30] are widely used in supervised machine learning due to their powerful nonlinear mapping capability [18]. The neural network version of the ELM framework relies on the randomness of the weights between the input and the hidden layer. This allows a speedy calculation and has shown good classification results. In turn, this has opened the door to deep learning and ensemble methodologies aimed at solving more recent problems [11, 55].
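To make the "speedy calculation" concrete, here is a minimal single-hidden-layer ELM in its standard form: the input-to-hidden weights are drawn at random and never trained, and only the output weights are fitted by a least-squares solve. Function and variable names are ours, chosen for illustration:

```python
import numpy as np

def train_elm(X, T, n_hidden=100, seed=0):
    """Standard ELM fit: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden activations, n x n_hidden
    beta = np.linalg.pinv(H) @ T      # Moore-Penrose least-squares solve
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The only expensive step is a single pseudoinverse (or equivalent linear solve), which is why training is fast compared with iterative backpropagation.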
Deep learning and ensemble methodologies compete for performance on mainstream supervised machine learning problems, in both multi-class classification and regression [54, 66]. Deep learning predictors focus on decomposing features into multi-level representations through hierarchical architectures, learning the task while minimizing errors [48]. Deep learning methodologies are c