Graph-regularized multi-view semantic subspace learning
ORIGINAL ARTICLE
Jinye Peng¹ · Peng Luo¹ · Ziyu Guan¹ · Jianping Fan²
Received: 18 September 2017 / Accepted: 5 December 2017
© The Author(s) 2017. This article is an open access publication
Abstract
Many real-world datasets are represented by multiple features or modalities, which often provide compatible and complementary information about each other. In order to obtain a good data representation that synthesizes multiple features, researchers have proposed different multi-view subspace learning algorithms. Although label information has been exploited for guiding multi-view subspace learning, previous approaches did not adequately capture the underlying semantic structure in the data. In this paper, we propose a new multi-view subspace learning algorithm called multi-view semantic learning (MvSL). MvSL learns a nonnegative latent space and tries to capture the semantic structure of data through a novel graph embedding framework, in which an affinity graph characterizing intra-class compactness and a penalty graph characterizing inter-class separability are defined in a general form. The intuition is to keep intra-class items near each other while pushing inter-class items away from each other in the common subspace learned across multiple views. We explore three specific definitions of the graphs and compare them analytically and empirically. To properly assess nearest neighbors in the multi-view context, we develop a multiple kernel learning method for obtaining an optimal kernel combination from multiple features. In addition, we encourage each latent dimension to be associated with a subset of views via sparseness constraints. In this way, MvSL is able to capture flexible conceptual patterns hidden in multi-view features. Experiments on three real-world datasets demonstrate the effectiveness of MvSL.
Keywords: Multi-view learning · Nonnegative matrix factorization · Graph embedding · Multiple kernel learning · Structured sparsity
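The graph-embedding intuition described above (pulling intra-class items together while pushing inter-class items apart in the learned subspace) can be illustrated with a toy regularizer. The following is a minimal numpy sketch of that general idea only, not the paper's actual objective; the names `W_a`, `W_p`, and `gamma` are hypothetical:

```python
import numpy as np

def laplacian(W):
    # Unnormalized graph Laplacian L = D - W, where D is the degree matrix.
    return np.diag(W.sum(axis=1)) - W

def graph_embedding_penalty(V, W_a, W_p, gamma=1.0):
    # V: k x n latent representation (one column per item).
    # tr(V L_a V^T) = (1/2) * sum_ij W_a[i,j] * ||v_i - v_j||^2 is small
    # when affinity-graph (intra-class) neighbors are close in the subspace;
    # the penalty-graph term is subtracted, so inter-class pairs are
    # rewarded for being far apart.
    L_a, L_p = laplacian(W_a), laplacian(W_p)
    return np.trace(V @ L_a @ V.T) - gamma * np.trace(V @ L_p @ V.T)
```

A representation that places same-class items close and different-class items far apart yields a lower value of this penalty than the reverse arrangement.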
* Corresponding author: Peng Luo ([email protected])
Jinye Peng ([email protected]) · Ziyu Guan ([email protected]) · Jianping Fan ([email protected])
1 College of Information and Technology, Northwest University of China, Xi'an 710127, China
2 Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA

1 Introduction
In many real-world data analysis problems, instances (items) are often described by multiple modalities or views. It is therefore natural to integrate multi-view information to obtain a more robust representation, rather than relying on a single view. A good integration of multi-view features can lead to a more comprehensive description of the data items, which in turn can improve the performance of many related applications. An active area of multi-view learning is multi-view latent subspace learning, which aims to obtain a compact latent representation by taking advantage of inherent structures and relations across multiple views. A pioneering technique in this area is canonical correlation analysis (CCA) [1], which tries to learn a pair of projections, one per view, that maximize the correlation between the projected views.
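As a reference point, classical two-view CCA can be sketched in a few lines of numpy: whiten each centered view, then take the SVD of the cross-product of the whitened bases, whose singular values are the canonical correlations. This is a minimal illustration of standard CCA under the stated construction, not the authors' implementation:

```python
import numpy as np

def cca(X, Y, k):
    # X: n x d1, Y: n x d2 data matrices (rows are items). Returns the
    # top-k canonical variates of each view and their correlations.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)

    def whiten(A):
        # A = U S V^T; A @ (V / S) = U has orthonormal columns.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U, Vt.T / s

    Ux, Wx = whiten(X)
    Uy, Wy = whiten(Y)
    # Singular values of Ux^T Uy are cosines of the principal angles
    # between the two view subspaces, i.e. the canonical correlations.
    U, s, Vt = np.linalg.svd(Ux.T @ Uy)
    A = Wx @ U[:, :k]        # projection for view 1
    B = Wy @ Vt.T[:, :k]     # projection for view 2
    return X @ A, Y @ B, s[:k]
```

When two views are noisy observations of a shared latent signal, the leading canonical correlation recovered this way is close to 1.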