A density-based maximum margin machine classifier
Jinsong Wang¹ · Jiping Liao¹ · Wei Huang²,³

¹ School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, China
² School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
³ Tianjin Key Lab of Intelligent Computing and Novel Software Technology, Tianjin University of Technology, Tianjin, China

Corresponding author: Wei Huang, [email protected]

Received: 18 September 2017 / Revised: 30 September 2019 / Accepted: 4 February 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract

Classic support vector machine classifiers find separating hyperplanes by considering patterns of data sets, such as so-called support vectors, without any character, i.e., without any global information concerning the relationship between one point and other points. In this study, we propose a density-based maximum margin machine classifier based on the idea of replacing support vectors with edge-points. Each edge-point of a data set is characterized by a density that represents the distance between the point and its neighbours. In some sense, the density character of a pattern (edge-point) is used here as global information relating the pattern to other points. To evaluate the performance of the proposed approach, we test it on several benchmark data sets. A comparative study demonstrates the advantages of our new approach.

Keywords: Classification · Density-based maximum margin machine · Support vectors · Edge-points
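The abstract does not spell out how density is computed, so the following is only a minimal sketch of one plausible reading: a point's density is taken to be the mean Euclidean distance to its k nearest neighbours, and points in sparse regions (large mean distance) are flagged as edge-point candidates. The function name, the k-nearest-neighbour definition, and the quantile threshold are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def edge_point_candidates(X, k=5, quantile=0.8):
    """Flag candidate edge-points of a data set X of shape (n, d).

    Assumed definition (not taken from the paper): the 'density' score of
    a point is the mean Euclidean distance to its k nearest neighbours;
    points whose score falls in the top (1 - quantile) fraction lie in
    sparse regions and are treated as edge-point candidates.
    """
    # Pairwise Euclidean distances, shape (n, n).
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # Sort each row; column 0 is the self-distance (zero), so take
    # columns 1..k as the k nearest neighbours.
    knn = np.sort(dist, axis=1)[:, 1:k + 1]
    score = knn.mean(axis=1)              # larger score = sparser region
    threshold = np.quantile(score, quantile)
    return score, score >= threshold      # scores and edge-point mask

# Usage: X = np.random.rand(200, 2); scores, mask = edge_point_candidates(X)
```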
1 Introduction

The past decades have witnessed the development of various large margin classifiers that have been widely used in many applications [1–6]. In the design of these classifiers, one pattern of data is considered as one point, while one label of patterns represents one class. The underlying idea of these large margin classifiers is essentially to construct the optimal separating hyperplane among different labels (classes) of patterns. These large margin classifiers obtain the decision hyperplane based on support vectors. For simplicity, let us take the support vector machine (SVM) as an example. It is well known that the decision hyperplane of the SVM is determined by so-called support vectors, which are essentially selected patterns (points) of the data. Any information concerning the relationship between these support vectors and any other points (e.g., distance) is irrelevant to the hyperplane.

There have been a number of studies concentrating on SVMs. Tang et al. [7] proposed a novel sparse group feature selection method for a multiclass SVM. Xing and Ji [8] proposed a novel robust one-class SVM (OCSVM) based on the rescaled hinge loss function. Tang et al. [9] proposed a regular simplex SVM (RSSVM) for K-class classification from a novel perspective. Tanveer et al. [10] proposed novel sparse pinball twin SVMs. Vijayarajeswari et al. [11] presented a hybrid classifier based on SVM and the Hough transform. All these studies have disc
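The claim that the SVM hyperplane depends only on the support vectors can be checked directly: training patterns with zero dual coefficients are inactive constraints, so refitting on the support vectors alone recovers essentially the same hyperplane. The toy data and scikit-learn usage below are illustrative, not drawn from the paper's experiments.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated classes (toy data for illustration only).
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
print("number of support vectors:", clf.support_vectors_.shape[0])

# Refit on the support vectors alone: points other than the support
# vectors carry no information about the decision boundary, so the
# hyperplane (w, b) is essentially unchanged.
clf_sv = SVC(kernel="linear", C=1.0).fit(X[clf.support_], y[clf.support_])
print("w:", clf.coef_, "vs", clf_sv.coef_)
print("b:", clf.intercept_, "vs", clf_sv.intercept_)
```

This is exactly the locality the paper objects to: no global, density-style information about how a support vector sits relative to the rest of the data enters the solution.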