Discriminative low-rank projection for robust subspace learning



ORIGINAL ARTICLE

Zhihui Lai1,2 · Jiaqi Bao1,2 · Heng Kong3 · Minghua Wan4 · Guowei Yang4

Received: 7 July 2019 / Accepted: 25 February 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract

Robustness to outliers, noise, and corruption has recently received increasing attention as a way to improve the performance of linear feature extraction and image classification. As one of the most effective subspace learning methods, low-rank representation (LRR) improves the robustness of an algorithm by exploiting the global representative structure among the samples. However, traditional LRR cannot project the training samples into a low-dimensional subspace using supervised information. In this paper, we therefore integrate the properties of LRR with supervised dimensionality reduction techniques to obtain an optimal low-rank subspace and a discriminative projection at the same time. To achieve this goal, we propose a novel model named Discriminative Low-Rank Projection (DLRP). Furthermore, DLRP overcomes the small-class problem, in which the number of projections is bounded by the number of classes. Our model can be solved by the linearized alternating direction method with adaptive penalty (LADMAP) together with the singular value decomposition. We also analyze the differences between DLRP and previous related models. Extensive experiments conducted on various contaminated databases confirm the superiority of the proposed method.

Keywords  Feature selection · Low-rank representation · Image classification · Small-class problem · Subspace learning
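For reference, the standard LRR model that the abstract builds on (in the common formulation from the LRR literature; the notation here is assumed, not taken from this paper) represents the data matrix X by itself with a nuclear-norm penalty encouraging a low-rank coefficient matrix Z and a column-sparse error term E:

```latex
\min_{Z,E} \; \|Z\|_* + \lambda \|E\|_{2,1}
\quad \text{s.t.} \quad X = XZ + E
```

Here \(\|Z\|_*\) is the nuclear norm (sum of singular values) and \(\|E\|_{2,1}\) sums the \(\ell_2\) norms of the columns of E, which makes the model robust to sample-specific corruptions.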

* Corresponding author: Heng Kong, [email protected]
Zhihui Lai [email protected] · Jiaqi Bao [email protected] · Minghua Wan [email protected] · Guowei Yang [email protected]

1 College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
2 Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen 518060, China
3 BaoAn Central Hospital of Shenzhen, Shenzhen 518060, China
4 School of Information Engineering, Nanjing Audit University, Nanjing 211815, Jiangsu, P.R. China

1 Introduction

Dimensionality reduction is one of the simplest and most effective techniques in machine learning and classification problems [1, 2]. It was proposed to solve the critical problem

called the "curse of dimensionality" [3], whereby models cannot perform well when processing high-dimensional data and may even suffer from singularity problems. Recently, many classification models based on image recognition have been proposed, such as multiple-instance learning (MIL) [4] and subspace learning. The most classical subspace learning methods include principal component analysis (PCA) [5–7] and linear discriminant analysis (LDA) [8–10]. PCA projects high-dimensional data into a low-dimensional subspace along the directions of maximum variance, while LDA seeks the optimal discriminative subspace in which the ratio of between-class scatter to within-class scatter is maximized.
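The PCA step described above can be sketched in a few lines: center the data, eigendecompose the covariance matrix, and keep the eigenvectors with the largest eigenvalues as projection directions. This is a minimal illustration of the general technique, not code from the paper; the function name `pca_project` and the synthetic data are our own.

```python
import numpy as np

def pca_project(X, k):
    """Project samples (rows of X) onto the top-k principal components,
    i.e. the directions of maximum variance."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = Xc.T @ Xc / (X.shape[0] - 1)      # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]                # top-k eigenvectors as columns
    return Xc @ W, W                        # projected data and projection matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))              # 100 samples in 10 dimensions
Y, W = pca_project(X, 2)
print(Y.shape)                              # (100, 2)
```

LDA differs in that its projection directions come from the generalized eigenproblem on the between-class and within-class scatter matrices, which is why its number of useful projections is bounded by the number of classes minus one (the small-class problem DLRP aims to overcome).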