Low-Rank Discriminative Adaptive Graph Preserving Subspace Learning
Haishun Du1 · Yuxi Wang2 · Fan Zhang1 · Yi Zhou2
Accepted: 29 August 2020 © Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
The global and local geometric structures of data play a key role in subspace learning. Although many manifold-based subspace learning methods have been proposed to preserve the local geometric structure of data, they usually characterize it with a predefined neighbor graph. However, a predefined neighbor graph might not be optimal, since it remains fixed during the subsequent subspace learning process. Moreover, most manifold-based subspace learning methods ignore the global structure of data. To address these issues, we propose a low-rank discriminative adaptive graph preserving (LRDAGP) subspace learning method for image feature extraction and recognition that integrates low-rank representation, adaptive manifold learning, and a supervised regularizer into a unified framework. To capture the optimal local geometric structure of data, LRDAGP adopts an adaptive manifold learning strategy in which the neighbor graph is adaptively updated during the subspace learning process. To capture the optimal global structure of data, LRDAGP also seeks the low-rank representations of data in a low-dimensional subspace during the subspace learning process. Moreover, to improve the discrimination ability of the learned subspace, a supervised regularizer is designed and incorporated into the LRDAGP model. Experimental results on several image datasets show that LRDAGP is effective for image feature extraction and recognition.

Keywords Low-rank constraints · Graph preserving · Subspace learning · Feature extraction
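The adaptive graph strategy described above re-estimates the neighbor graph from the current low-dimensional representation instead of fixing it in advance. As a minimal illustrative sketch (not the authors' exact formulation, which is derived later in the paper), the widely used adaptive-neighbor rule of Nie et al. assigns each sample a simplex-constrained weight vector with a closed-form, k-sparse solution; the function name adaptive_neighbor_graph is ours for illustration:

```python
import numpy as np

def adaptive_neighbor_graph(X, k=5):
    """Re-estimate an affinity graph from the current (projected) data.

    Per sample i, solves
        min_{s_i >= 0, s_i^T 1 = 1}  sum_j ||x_i - x_j||^2 s_ij + gamma_i s_ij^2,
    whose closed-form solution puts nonzero weight on the k nearest
    neighbors only (the adaptive-neighbor rule of Nie et al.; the paper's
    own update may differ).

    X : (n, d) array of samples in the current subspace, n >= k + 2.
    Returns an (n, n) symmetric affinity matrix S.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.fill_diagonal(D, np.inf)            # a sample is not its own neighbor
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[:k + 1]     # k neighbors + one more for the gap
        d = D[i, idx]
        # Denominator follows from the KKT conditions with gamma_i chosen
        # so that exactly k weights are nonzero; weights sum to 1.
        denom = k * d[k] - np.sum(d[:k]) + 1e-12
        S[i, idx[:k]] = (d[k] - d[:k]) / denom
    return (S + S.T) / 2.0                 # symmetrize for graph preservation
```

Re-running such an update after each projection step is what allows the graph to track the evolving subspace rather than staying fixed, which is the point of contrast with predefined-graph methods.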
Haishun Du [email protected]
Yuxi Wang [email protected]
Fan Zhang [email protected]
Yi Zhou [email protected]
1 School of Computer and Information Engineering, Henan University, Kaifeng, China
2 Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng, China
1 Introduction

Subspace learning has attracted much attention in the past decades. Its basic idea is to seek a projection matrix that projects data from an original high-dimensional space into a low-dimensional feature subspace. Many subspace learning methods have been proposed, among which principal component analysis (PCA) [1] and linear discriminant analysis (LDA) [2] are the two most well-known. However, PCA, LDA, and their variations [3–6] only focus on the Euclidean structure and fail to discover the local geometric structure of data. Consequently, these classical subspace learning methods cannot faithfully approximate the data distribution in the low-dimensional feature subspace. Inspired by nonlinear manifold learning techniques [7–9], many linear manifold-based subspace learning methods [10–17], such as LPP [10], MFA [13], and RILPP [15], have been developed for discovering the local geometric structure of data.
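To make the projection idea concrete, the following minimal sketch uses PCA, the canonical instance cited above; the function name pca_projection and the toy data are ours for illustration. Manifold-based methods such as LPP differ only in the criterion used to choose the projection matrix W:

```python
import numpy as np

def pca_projection(X, dim):
    """Minimal PCA: learn a projection matrix W and map X into a
    low-dimensional subspace, the basic operation shared by the
    subspace learning methods discussed above.

    X : (n, d) data matrix, one sample per row.
    dim : target subspace dimension.
    Returns (W, Y) with W of shape (d, dim) and Y = X_centered @ W.
    """
    Xc = X - X.mean(axis=0)                  # center the data
    # Eigenvectors of the sample covariance matrix.
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :dim]            # top-`dim` principal directions
    return W, Xc @ W

# Example: 100 samples of 64-dimensional data reduced to 10 dimensions.
W, Y = pca_projection(np.random.randn(100, 64), dim=10)
```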