Fuzzy ELM for classification based on feature space

Yonghe Chu 1 & Hongfei Lin 1 & Liang Yang 1 & Dongyu Zhang 1 & Shaowu Zhang 1 & Yufeng Diao 1 & Deqin Yan 2

* Corresponding author: Hongfei Lin, [email protected]

1 Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China
2 School of Computer and Information Technology, Liaoning Normal University, Dalian 116081, China

Received: 5 December 2018 / Revised: 9 July 2019 / Accepted: 1 October 2019
© Springer Science+Business Media, LLC, part of Springer Nature 2019

Abstract
As a competitive machine learning algorithm, the extreme learning machine (ELM), with its simple theory and easy implementation, has been widely used in the field of pattern recognition. Recently, researchers have proposed ELM variants that accommodate noise and outlier data. With a proper fuzzy membership function, a fuzzy ELM can effectively reduce the effect of outliers when solving the classification problem. However, how to make ELM learn accurately in the presence of noise is still an important research topic. A novel fuzzy ELM technique (ANFELM) is proposed in this paper. In the algorithm, the membership degree of each sample is calculated in a feature mapping space instead of the data input space. The algorithm performs well in reducing the effect of outliers and significantly improves classification accuracy and generalization. Experiments on UCI datasets and textual datasets show that the proposed algorithm significantly improves the classification capability of ELM and is superior to other algorithms.

Keywords Extreme learning machine · Classification · Membership degree · Feature mapping space
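The paper's exact membership function is not reproduced in this excerpt; the snippet below is only a minimal sketch of the general idea, assuming a common distance-to-class-centre membership: each training sample is mapped into the hidden-layer feature space, and samples lying far from their class centre (likely outliers) receive low membership. The function name feature_space_membership and the smoothing parameter delta are hypothetical, not the authors' definitions.

```python
import numpy as np

def feature_space_membership(H, y, delta=1e-3):
    """Illustrative fuzzy memberships computed in the hidden-layer feature
    space H (one row per sample) rather than in the raw input space.
    NOTE: distance-to-class-centre is an assumed membership function,
    not necessarily the one used in the paper."""
    s = np.empty(len(y), dtype=float)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        centre = H[idx].mean(axis=0)                 # class centre in feature space
        d = np.linalg.norm(H[idx] - centre, axis=1)  # distance of each sample to its centre
        s[idx] = 1.0 - d / (d.max() + delta)         # distant samples get low membership
    return s
```

The resulting memberships s would typically reweight each sample's contribution to the training loss, so that suspected outliers influence the learned classifier less.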

1 Introduction

The extreme learning machine (ELM) [8–11, 34] was proposed by Huang et al. as an extension of traditional single-hidden-layer feedforward networks (SLFNs). An ELM randomly generates the input weights and the biases of its hidden-layer nodes; of all the parameters, only the output weights are determined analytically, so training reduces to solving a linear model. Traditional neural network algorithms, such as the BP neural network [25], adjust the input weights and hidden-layer biases by a gradient-based method in an iterative manner.

However, gradient-descent-based methods have the disadvantages of high time complexity and a tendency to trap the search in a local optimum. Compared with traditional neural network algorithms, ELMs randomly generate the input weights and biases of the hidden-layer nodes; therefore, ELMs can reach a solution in less time and require less human intervention during training. It has been shown that even without updating the parameters of the hidden layer, an SLFN with randomly generated hidden neurons and tuneable output weights retains its universal approximation capability [10, 37]. Compared to gradient-based algorithms, ELMs
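As a minimal sketch of the training scheme described above (standard ELM, not the ANFELM variant proposed in this paper), the following Python snippet draws the input weights and hidden biases at random, never updates them, and obtains the output weights in closed form through a least-squares solve with the Moore-Penrose pseudoinverse. The function names, the sigmoid activation, and the uniform initialization range are illustrative choices, not details taken from the paper.

```python
import numpy as np

def elm_train(X, T, n_hidden=100, seed=0):
    """Basic ELM training: random hidden layer, analytic output weights.
    X: (n_samples, n_features) inputs; T: (n_samples, n_classes) one-hot targets."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))  # random input weights (never updated)
    b = rng.uniform(-1.0, 1.0, n_hidden)                # random hidden biases (never updated)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))              # hidden-layer output matrix (sigmoid)
    beta = np.linalg.pinv(H) @ T                        # least-squares output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)                  # predicted class index per sample
```

With one-hot targets T, elm_predict classifies by the largest output; in practice a regularized (ridge) solve is often substituted for the plain pseudoinverse to improve stability.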