Laplacian regularized low-rank sparse representation transfer learning



ORIGINAL ARTICLE

Lin Guo¹ · Qun Dai¹

Received: 28 December 2019 / Accepted: 18 September 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract

In unsupervised transfer learning, it is extremely valuable to extract knowledge effectively from the vast amount of unlabeled data that exists by utilizing labeled data from other, similar databases. Real-world data often reside on a low-dimensional manifold embedded in a high-dimensional ambient space. However, current subspace transfer learning methods do not consider the nonlinear geometric structure of the data, so local similarity information between samples may be lost during learning. To improve this, we propose a new subspace transfer learning algorithm, namely Laplacian Regularized Low-Rank Sparse Representation Transfer Learning (LRLRSR-TL). After introducing low-rank representation and sparse constraints, the method incorporates a Laplacian regularization term to represent the global low-dimensional structure and capture the inherent nonlinear geometric information of the data. Experiments conducted on five different cross-domain visual image datasets show that the proposed method achieves outstanding performance compared with several state-of-the-art transfer learning methods.

Keywords Transfer learning · Representation matrix reconstruction · Regularization · Subspace learning
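The paper's exact objective is not given in this excerpt, but the abstract's key ingredient, a Laplacian regularization term that preserves local similarity between samples, is commonly written as tr(Z L Zᵀ), where Z is the representation matrix and L is the graph Laplacian of a nearest-neighbour affinity graph. The following is a minimal illustrative sketch of that generic construction, not the authors' implementation; the function names, the 0/1 k-NN affinities, and the toy data are all assumptions for illustration:

```python
import numpy as np

def knn_laplacian(X, k=3):
    """Unnormalized graph Laplacian L = D - W from a symmetrized
    k-nearest-neighbour affinity graph over the columns of X.
    X: d x n data matrix (each column is one sample)."""
    n = X.shape[1]
    # pairwise squared Euclidean distances between samples
    sq = np.sum(X ** 2, axis=0)
    dist = sq[:, None] + sq[None, :] - 2 * X.T @ X
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dist[i])[1:k + 1]  # skip the sample itself
        W[i, idx] = 1.0                     # simple 0/1 affinities
    W = np.maximum(W, W.T)                  # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    return D - W

def laplacian_regularizer(Z, L):
    """tr(Z L Z^T): small when columns of Z belonging to nearby samples
    are similar, i.e. the representation preserves local geometry."""
    return np.trace(Z @ L @ Z.T)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))   # toy data: 5 features, 20 samples
L = knn_laplacian(X, k=3)
Z = rng.standard_normal((8, 20))   # a hypothetical representation matrix
print(laplacian_regularizer(Z, L) >= 0)  # L is PSD, so the term is nonnegative
```

In a full low-rank sparse representation objective, this term would be added, with a trade-off weight, to the nuclear-norm and ℓ₁ penalties on Z; minimizing it pushes the representations of neighbouring samples toward each other, which is how the nonlinear manifold structure is retained.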

1 Introduction

Most common classification methods apply a classification model learned from training samples to test samples drawn from the same distribution. Unfortunately, in complex applications, trained models often fall short when the distributions of training and test data differ from each other due to varying environments, sensor types, visual resolutions, illumination, and other factors. A straightforward way to settle this problem is to collect adequate labeled data that well characterizes the distribution of the test data, and then use it to retrain the model. However, collecting and labeling enough data is time-consuming and expensive; labeling large amounts of data in a new domain is costly and impractical. In this scenario, we can leverage existing labeled and related datasets to improve classification performance, which is the solution provided by the transfer learning paradigm. Transfer learning can fully exploit the correlation between the data of two domains, and transfer the useful knowledge learned from the source domain data to the target samples to complete the task in the new target domain [1–3]. Transfer learning indicates that different but related data samples can promote the learning of new tasks. It has been proven that, if the new knowledge learned from the relevant data is used properly, it can improve the lear

* Qun Dai, [email protected]
¹ College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China