

ORIGINAL ARTICLE

An adaptive kernel sparse representation‑based classification

Xuejun Wang1 · Wenjian Wang2 · Changqian Men1

Received: 16 December 2019 / Accepted: 24 February 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

* Corresponding author: Wenjian Wang, [email protected]

1 School of Computer and Information Technology, Shanxi University, Taiyuan 030006, Shanxi, China
2 Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan 030006, Shanxi, China

Abstract
In recent years, scholars have paid increasing attention to sparse representation. Building on compressed sensing and machine learning, sparse representation-based classification (SRC) has been used extensively in classification. However, SRC is not suitable for samples with non-linear structures, which arise in many practical applications. Moreover, SRC overemphasizes sparsity and overlooks correlation information, which is of great importance in classification. To address these shortcomings, this study puts forward an adaptive kernel sparse representation-based classification (AKSRC). First, samples are mapped from the original feature space to a high-dimensional feature space. Second, after a suitable kernel function is selected, a sample is represented as a linear combination of training samples of the same class. Furthermore, unlike general approaches, AKSRC adopts the trace norm, which adapts to the structure of the dictionary, so that a better linear representation involving the most discriminative samples can be obtained. Therefore, AKSRC has more powerful classification ability. Finally, the superiority and effectiveness of the proposed AKSRC are verified by experiments on benchmark data sets.

Keywords  Sparse representation · Trace norm · Sparsity · Correlation · Kernel function
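To make the classification rule described above concrete, the following is a minimal sketch of class-wise representation in a kernel-induced feature space followed by a residual comparison. It assumes an RBF kernel and substitutes a plain ridge (l2) regularizer for the paper's trace-norm regularizer, which is not reproduced here; all names and parameter values are illustrative.

import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def classify_by_kernel_representation(X_train, y_train, x_test, lam=1e-3, gamma=0.5):
    """Assign x_test to the class whose training samples reconstruct it best
    in the kernel-induced feature space (smallest representation residual)."""
    residuals = {}
    k_yy = rbf_kernel(x_test[None, :], x_test[None, :], gamma)[0, 0]
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        Kcc = rbf_kernel(Xc, Xc, gamma)                       # class-c Gram matrix
        kcy = rbf_kernel(Xc, x_test[None, :], gamma).ravel()  # kernel vector k_c(y)
        # Ridge-regularized coefficients; the paper's trace-norm regularizer
        # would replace the lam * I term with a dictionary-adaptive penalty.
        alpha = np.linalg.solve(Kcc + lam * np.eye(len(Xc)), kcy)
        # Squared residual in feature space via the kernel trick:
        # ||phi(y) - Phi_c alpha||^2 = k(y,y) - 2 k_c(y)'alpha + alpha'K_cc alpha
        residuals[c] = k_yy - 2 * kcy @ alpha + alpha @ Kcc @ alpha
    return min(residuals, key=residuals.get)

# Toy usage on two Gaussian blobs:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(classify_by_kernel_representation(X, y, np.array([2.8, 3.1])))  # -> 1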

1 Introduction

Classifier design is a popular topic in pattern recognition, and scholars have proposed various classification methods [1, 2]. Owing to its simplicity and good availability, the nearest-neighbor (NN) classifier is used extensively; it assigns a sample to the class of its nearest training sample. As kernel principal component analysis (KPCA) [3], kernel Fisher discriminant analysis (KFD) [4], and other kernel-based algorithms continue to develop, kernel techniques are increasingly used in pattern recognition and data mining. The inner products of input data nonlinearly mapped into a high-dimensional feature space are computed through a kernel function. Yu et al. put forward the kernel nearest neighbor (Kernel-NN) classifier [5], which applies the nearest-neighbor rule in the high-dimensional feature space,
thus changing the sample distribution. Consequently, samples that are nonlinearly separable in the original feature space become linearly separable in the high-dimensional feature space. According to the experimental results, with a suitable kernel, the Kernel-NN classifier achieves better performance than the original NN classifier.
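As an aside on the kernel trick that Kernel-NN relies on, the sketch below finds nearest neighbors in the feature space without forming the mapping explicitly, via the identity ||phi(x) - phi(z)||^2 = k(x,x) - 2k(x,z) + k(z,z). A polynomial kernel is used because a distance-monotonic kernel such as the RBF would make Kernel-NN coincide with plain NN in the input space; the degree value is an illustrative assumption.

import numpy as np

def poly_kernel(A, B, degree=2, c=1.0):
    """Polynomial kernel matrix k(a, b) = (a.b + c)^degree."""
    return (A @ B.T + c) ** degree

def kernel_nn_classify(X_train, y_train, x_test, degree=2):
    """1-NN in the kernel-induced feature space (Kernel-NN).

    Feature-space distances follow from the kernel trick:
    ||phi(x) - phi(z)||^2 = k(x,x) - 2 k(x,z) + k(z,z).
    """
    k_xx = poly_kernel(x_test[None, :], x_test[None, :], degree)[0, 0]
    k_xz = poly_kernel(X_train, x_test[None, :], degree).ravel()
    k_zz = np.diag(poly_kernel(X_train, X_train, degree))
    dists = k_zz - 2 * k_xz + k_xx
    return y_train[np.argmin(dists)]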