Mutual-manifold regularized robust fast latent LRR for subspace recovery and learning
ORIGINAL ARTICLE
Mutual-manifold regularized robust fast latent LRR for subspace recovery and learning

Xianzhen Li^1 · Zhao Zhang^1,3 · Li Zhang^1,2 · Meng Wang^3

Received: 13 August 2019 / Accepted: 11 December 2019
© Springer-Verlag London Ltd., part of Springer Nature 2019
Abstract

In this paper, we propose a simple yet effective low-rank representation (LRR) and subspace recovery model called mutual-manifold regularized robust fast latent LRR. Our model improves representation ability and robustness in two respects. First, it builds on the Frobenius norm-based fast latent LRR, which decomposes given data into a principal feature part, a salient feature part and a sparse error, and improves it clearly by designing a mutual-manifold regularization to encode, preserve and propagate local information between the coefficients and the salient features. The mutual-manifold regularization is defined by using the coefficients as adaptive reconstruction weights for the salient features, and by constructing a Laplacian matrix over the salient features for the coefficients. In this way, important local topology information can be propagated between them, which can make the discovered subspace structures and features potentially more accurate for data representation. Second, our approach also improves the robustness of subspace recovery against noise and sparse errors in the coefficients: the original coefficient matrix is decomposed into an error-corrected part and a sparse error part that fits the noise in the coefficients, and the recovered coefficients are then used for robust subspace recovery. Experimental results on several public databases demonstrate that our method can outperform other related algorithms.

Keywords: Robust fast latent LRR · Subspace recovery and learning · Mutual-manifold regularization · Feature extraction
1 Introduction

Subspace recovery, clustering and learning of high-dimensional real application data by low-rank coding and approximation has been attracting considerable attention in recent years in the areas of data mining and machine learning [1–21, 37–39]. The most representative low-dimensional learning models are principal component analysis (PCA) [7] and its robust version, Robust PCA (RPCA)
Corresponding author: Zhao Zhang, [email protected]

1 School of Computer Science and Technology, Soochow University, Suzhou 215006, China
2 Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China
3 Key Laboratory of Knowledge Engineering with Big Data (Ministry of Education) and School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China
[10]. The conventional PCA computes a low-dimensional feature subspace by eigen-decomposition to embed original high-dimensional data, while RPCA improves the robustness of PCA by decomposing given data into a low-rank recovered component and an error part by optimizing a convex nuclear norm minimization-based model. Bu
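The RPCA decomposition described above can be sketched numerically. The following is a minimal, hedged illustration (not the paper's proposed method) of the standard inexact augmented Lagrange multiplier scheme for the convex RPCA model, which splits a data matrix D into a low-rank part L (via singular value thresholding on the nuclear norm) and a sparse error part S (via soft thresholding on the l1 norm); the parameter defaults follow common practice and are assumptions here:

```python
import numpy as np

def robust_pca(D, lam=None, tol=1e-7, max_iter=500):
    """Sketch of RPCA via inexact ALM: min ||L||_* + lam*||S||_1  s.t.  D = L + S."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # common default weight for the sparse term
    norm_D = np.linalg.norm(D)
    spec = np.linalg.norm(D, 2)             # spectral norm (largest singular value)
    Y = D / max(spec, np.abs(D).max() / lam)  # dual variable initialization
    mu, rho = 1.25 / spec, 1.5              # penalty parameter and its growth factor
    L, S = np.zeros_like(D), np.zeros_like(D)
    for _ in range(max_iter):
        # update L: singular value thresholding of (D - S + Y/mu)
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # update S: entrywise soft thresholding of (D - L + Y/mu)
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        # update the multiplier and penalty; stop when the constraint is met
        Z = D - L - S
        Y = Y + mu * Z
        mu = rho * mu
        if np.linalg.norm(Z) / norm_D < tol:
            break
    return L, S
```

On synthetic data built as a rank-5 matrix plus sparse gross corruption, this recovers the low-rank component to small relative error, which is the behavior the convex nuclear-norm model guarantees under incoherence assumptions.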