Projection multi-birth support vector machine for multi-classification
Yakun Wen1 · Jun Ma1 · Chao Yuan1 · Liming Yang2
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract As an important multi-classification learning tool, the multi-birth support vector machine (MBSVM) has been widely studied and applied due to its low computational complexity and good generalization. In this paper, a new multi-birth support vector machine, called the projection multi-birth support vector machine (PMBSVM), is proposed to handle multi-class classification problems. Specifically, we seek a projection direction w_k for the k-th class such that the covariance of the remaining samples (those not in the k-th class) is as small as possible in the projected space, while the samples of the k-th class lie as far as possible from the mean of the remaining samples. The proposed PMBSVM not only inherits the advantages of MBSVM, but also finds a suitable projection direction for each class so that the samples are separable in the projected space. Additionally, a regularization term is introduced to maximize the margin between different classes in the projected space. Moreover, a recursive PMBSVM algorithm is proposed to generate multiple orthogonal projection directions for each class. We then extend the proposed approaches to nonlinear cases through kernel techniques. Simulation results on benchmark datasets show that the proposed algorithms improve generalization performance in most cases.

Keywords Multi-classification · Multi-birth support vector machine · Projection multi-birth support vector machine
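To make the description in the abstract concrete, the k-th subproblem can be sketched as follows. This is only a minimal reading of the abstract, not the paper's exact formulation: the scatter matrix S_k, the rest-class mean \mu_{\bar k}, the slack variables \xi_i, and the trade-off parameters c_1, c_2 are our own notation. Let X_{\bar k} collect the samples not in class k and let \mu_{\bar k} denote their mean; then a plausible form of the k-th problem is

\[
S_k = \sum_{x \in X_{\bar k}} (x - \mu_{\bar k})(x - \mu_{\bar k})^{\top},
\qquad
\min_{w_k,\,\xi}\ \frac{1}{2} w_k^{\top} S_k w_k + \frac{c_1}{2}\|w_k\|^2 + c_2 \sum_{x_i \in \mathrm{class}\ k} \xi_i
\]
\[
\text{s.t.}\quad w_k^{\top}(x_i - \mu_{\bar k}) \ge 1 - \xi_i,\quad \xi_i \ge 0 \quad \text{for all } x_i \text{ in class } k.
\]

Here the first term keeps the projected rest-class samples tightly clustered (small covariance), the regularization term c_1\|w_k\|^2/2 corresponds to the margin-maximizing term mentioned above, and the constraints push the class-k samples away from the projected mean of the remaining samples. A new point x would then naturally be assigned to \arg\max_k |w_k^{\top}(x - \mu_{\bar k})|, mirroring the MBSVM decision rule.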
Liming Yang
[email protected]

1 College of Information and Electrical Engineering, China Agricultural University, Beijing, 100083, China

2 College of Science, China Agricultural University, Beijing, 100083, China

1 Introduction

The Support Vector Machine (SVM) [1] is a powerful binary classification learning strategy derived from statistical learning theory. It is well known that the classical SVM constructs two parallel supporting hyperplanes, maximizing the margin between them while minimizing the training error. Owing to its solid mathematical foundation and excellent generalization ability, SVM has attracted wide attention from researchers and has been successfully applied in data analysis [2–4]. However, SVMs are not well suited to large-scale problems, and several improved SVM algorithms have been presented to speed up the training process, such as sequential minimal optimization (SMO) [5] and the chunking algorithm [6]. Mangasarian brought forward the generalized eigenvalue proximal support vector machine (GEPSVM) [7, 8], which relaxes the parallelism requirement on the hyperplanes and seeks a pair of non-parallel hyperplanes such that each plane is close to one of the two classes and far from the other. The classification hyperplanes are obtained by solving two generalized eigenvalue problems. This algorithm has a low computational burden, but its generalization ability is relatively poor. Subsequently, Jayadeva et al. proposed a non-parallel hyperplane classification algorithm known as the twin support vector machine (TWSVM).
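For reference, the two optimization problems alluded to above can be written in their standard forms from the literature (with the usual notation: C a penalty parameter, \xi_i slack variables, \delta a Tikhonov regularization parameter, A and B the sample matrices of the two classes, and e a vector of ones). The soft-margin SVM primal is

\[
\min_{w,b,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i
\quad \text{s.t.}\quad y_i(w^{\top}x_i + b) \ge 1 - \xi_i,\ \xi_i \ge 0,
\]

which maximizes the margin 2/\|w\| between the parallel hyperplanes w^{\top}x + b = \pm 1. GEPSVM instead minimizes, for each class, a regularized Rayleigh quotient of the form

\[
\min_{z \neq 0}\ \frac{\|[A\ \ e]\,z\|^2 + \delta\|z\|^2}{\|[B\ \ e]\,z\|^2},
\qquad z = \begin{bmatrix} w \\ -\gamma \end{bmatrix},
\]

whose minimizer is the eigenvector associated with the smallest eigenvalue of the generalized eigenvalue problem G z = \lambda H z, with G = [A\ \ e]^{\top}[A\ \ e] + \delta I and H = [B\ \ e]^{\top}[B\ \ e]; exchanging the roles of A and B yields the second plane.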