Online Bayesian max-margin subspace learning for multi-view classification and regression
Jia He1,2,4 · Changying Du3,5 · Fuzhen Zhuang1,4 · Guoping Long5 · Xin Yin1 · Qing He1,4
Received: 23 April 2018 / Revised: 16 September 2019 / Accepted: 4 October 2019 © The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature 2019
Abstract
Multi-view data have become increasingly popular in many real-world applications where data are generated from different information channels or different views, such as image + text, audio + video, and webpage + link data. The past decades have witnessed a number of studies devoted to multi-view learning algorithms, especially the predictive latent subspace learning approaches, which aim at obtaining a subspace shared by multiple views and then learning models in that shared subspace. However, few efforts have been made to handle online multi-view learning scenarios. In this paper, we propose an online Bayesian multi-view learning algorithm which learns a predictive subspace under the max-margin principle. Specifically, we first define the latent margin loss for classification or regression in the subspace, and then cast the learning problem into a variational Bayesian framework by exploiting the pseudo-likelihood and data augmentation ideas. With the variational approximate posterior inferred from past samples, we can naturally combine historical knowledge with newly arrived data in a Bayesian passive-aggressive style. Finally, we extensively evaluate our model on several real-world data sets, and the experimental results show that our models achieve superior performance compared with a number of state-of-the-art competitors.
Keywords Multi-view learning · Online learning · Bayesian subspace learning · Max-margin · Classification · Regression
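To make the "posterior from past samples becomes the prior for newly arrived data" idea concrete, the following is a minimal illustrative sketch, not the paper's variational max-margin model: it streams mini-batches through a conjugate Bayesian linear regression whose Gaussian posterior after each batch serves as the prior for the next one. The synthetic data, batch size, and names such as `noise_var` are assumptions made only for this illustration.

```python
# Minimal sketch (assumed setup, not the paper's model): streaming Bayesian
# linear regression where the posterior inferred from past mini-batches is
# reused as the prior for each newly arrived mini-batch.
import numpy as np

rng = np.random.default_rng(0)
d, noise_var = 5, 0.1
w_true = rng.normal(size=d)

# Prior: w ~ N(0, I), stored as mean and precision (inverse covariance).
mean = np.zeros(d)
prec = np.eye(d)

for t in range(20):                    # stream of mini-batches
    X = rng.normal(size=(32, d))       # newly arrived data
    y = X @ w_true + np.sqrt(noise_var) * rng.normal(size=32)

    # Conjugate update: the old posterior acts as the prior for the new batch.
    prec_new = prec + X.T @ X / noise_var
    mean = np.linalg.solve(prec_new, prec @ mean + X.T @ y / noise_var)
    prec = prec_new

print("estimation error:", np.linalg.norm(mean - w_true))
```

The same prior-chaining structure is what an online variational scheme exploits; the paper replaces the conjugate Gaussian likelihood above with a max-margin pseudo-likelihood handled by data augmentation.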
1 Introduction
Nowadays, multi-view data are often generated continuously from multiple information channels; for example, hundreds of YouTube videos consisting of visual, audio and text features are uploaded every minute. Different views usually contain complementary information, and multi-view learning can exploit this information to learn representations that are more expressive than those learned by single-view methods. Therefore, multi-view representation learning has become a very promising topic with wide applicability. Multi-view learning has attracted a great deal of interest over the past decades (Zhao et al. 2017; Quang et al. 2013; Sun
and Chao 2013; Li et al. 2016; Ye et al. 2015; Liu et al. 2016; Chen and Zhou 2018). Nowadays, there are many multi-view learning approaches, e.g., multiple kernel learning (Gönen and Alpaydın 2011), disagreement-based multi-view learning (Blum and Mitchell 1998), late fusion methods which combine the outputs of models constructed from different view features (Ye et al. 2012), and subspace learning methods for multi-view data (Chen et al. 2012). Among them, the multi-view subspace learning approaches, which seek a latent subspace shared by multiple views, are the most relevant to this work.
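As a point of reference for what "learning a shared subspace" means, here is a minimal sketch using classical canonical correlation analysis (CCA) on two synthetic views; it is not the Bayesian max-margin method proposed in the paper, and the data-generating process and dimensions are assumptions made only for illustration.

```python
# Minimal sketch (assumed setup): two views generated from shared latent
# factors, projected into a common subspace via whitening-based CCA.
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 2
z = rng.normal(size=(n, k))                                  # shared latent factors
X1 = z @ rng.normal(size=(k, 10)) + 0.1 * rng.normal(size=(n, 10))  # view 1
X2 = z @ rng.normal(size=(k, 20)) + 0.1 * rng.normal(size=(n, 20))  # view 2

def whiten(X):
    """Return whitened data and the map from raw features to whitened space."""
    X = X - X.mean(0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U, Vt.T / s

U1, W1 = whiten(X1)
U2, W2 = whiten(X2)
# Singular vectors of the whitened cross-covariance give the correlated directions.
A, corr, Bt = np.linalg.svd(U1.T @ U2)
proj1, proj2 = W1 @ A[:, :k], W2 @ Bt.T[:, :k]               # per-view maps into the shared subspace
print("top canonical correlations:", np.round(corr[:k], 3))
```

Unlike this deterministic projection, the approach developed in the paper treats the shared subspace as a latent variable with a posterior distribution and couples it to a max-margin predictor.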