Fisher-regularized supervised and semi-supervised extreme learning machine
Jun Ma¹ · Yakun Wen¹ · Liming Yang²

Received: 13 June 2018 / Accepted: 30 June 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
Abstract
The structural information of data contains useful prior knowledge and is therefore important for designing classifiers. The extreme learning machine (ELM) has proved a powerful technique for handling classification problems. However, it considers only simple, class-based prior structural information and ignores prior knowledge from the statistics and geometry of the data. In this paper, to capture more structural information of the data, we first propose a Fisher-regularized extreme learning machine (Fisher-ELM) by applying Fisher regularization within the ELM learning framework; its main goal is to build an optimal hyperplane such that the output weight and the within-class scatter are minimized simultaneously. The proposed Fisher-ELM reflects both the global characteristics and the local properties of the samples. Intuitively, Fisher-ELM approximately fulfills the Fisher criterion and can achieve good statistical separability. We then exploit a graph-based formulation to obtain a semi-supervised Fisher-ELM (Lap-FisherELM) by introducing manifold regularization, which characterizes the geometric information of the marginal distribution embedded in unlabeled samples. An efficient successive overrelaxation (SOR) algorithm is used to solve both Fisher-ELM and Lap-FisherELM; it converges linearly to a solution and can process very large datasets that need not reside in memory. Fisher-ELM and Lap-FisherELM avoid extra matrix manipulations and the computational burden of variable switching, which makes them well suited to relatively large-scale problems. Experiments on several datasets verify the effectiveness of the proposed methods.

Keywords Extreme learning machine · Semi-supervised learning · Within-class scatter · Fisher regularization · Manifold regularization
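As a rough illustration of the objective described in the abstract, the within-class scatter of the hidden-layer outputs can be folded into a regularized ELM solve. The sketch below is a minimal, hypothetical rendering of that idea under an assumed quadratic objective ‖β‖² + C‖Hβ − T‖² + μ·tr(βᵀS_wβ); the function names and parameters C and μ are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def within_class_scatter(H, y):
    """S_w = sum over classes c of sum_{i in c} (h_i - m_c)(h_i - m_c)^T,
    computed on the hidden-layer outputs H with integer labels y."""
    Sw = np.zeros((H.shape[1], H.shape[1]))
    for c in np.unique(y):
        Hc = H[y == c] - H[y == c].mean(axis=0)  # center samples of class c
        Sw += Hc.T @ Hc
    return Sw

def fisher_elm_train(X, T, y, n_hidden=30, C=10.0, mu=1e-3, seed=0):
    """Hypothetical Fisher-ELM sketch: ridge-regularized ELM plus a
    within-class-scatter penalty on the output weights beta."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    Sw = within_class_scatter(H, y)
    # Closed-form minimizer of the assumed quadratic objective:
    # (I + C H^T H + mu S_w) beta = C H^T T
    beta = np.linalg.solve(np.eye(n_hidden) + C * (H.T @ H) + mu * Sw,
                           C * (H.T @ T))
    return W, b, beta

def fisher_elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

With one-hot targets T, the predicted class is the argmax of the network output; setting μ = 0 recovers a standard ridge-regularized ELM, so the scatter term acts purely as an extra structural regularizer.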
Corresponding author: Liming Yang, [email protected]

1 College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
2 College of Science, China Agricultural University, Beijing 100083, China
1 Introduction

Single-hidden-layer feed-forward neural networks (SLFNs) [1] have received extensive attention and in-depth research in recent decades. It is well known that most learning algorithms for training SLFNs use gradient methods to optimize the weights in the network, such as the famous back-propagation algorithm [2] and the Levenberg–Marquardt algorithm [3]. In addition, some methods adopt forward selection or backward elimination to construct the network dynamically during training [4,5]. However, a key difficulty faced by SLFNs is that most of the algorithms described above do not guarantee global optimality. As one of the most successful SLFN training algorithms, support