Partially disentangled latent relations for multi-label deep learning


ORIGINAL ARTICLE

Si-ming Lian1 • Jian-wei Liu1 • Run-kun Lu1 • Xiong-lin Luo1

Received: 25 February 2020 / Accepted: 23 September 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020

Abstract
For multi-label learning, it is meaningful to extract label-specific features from the instances under the supervision of class labels, and the "purified" feature representations can also be shared across labels during the learning process. Besides, it is essential to distinguish the inter-instance relations in the input space from the inter-label correlations in the output space of multi-label datasets, which helps improve the performance of multi-label algorithms. However, most current multi-label algorithms aim to capture the mapping between instances and labels while ignoring the instance relations and label correlations encoded in the multi-label data structure. Motivated by these issues, we leverage a deep network to learn specific feature representations for each multi-label component without abandoning overlapped features that may belong to other components. Meanwhile, Euclidean distance matrices are used to construct the diagonal matrix of the diffusion function, and a graph-based diffusion method yields a new latent class representation that preserves inter-instance relations; this ensures that similar features have similar label sets. Further, considering that these feature representations contribute differently to the final multi-label prediction, a self-attention mechanism is introduced to fuse the label-specific instance features into a new joint feature representation, which derives dynamic weights for multi-label prediction. Finally, experimental results on real data sets show the promising applicability of our approach.

Keywords Disentangled latent relations · Diffusion · Self-attention · Feature representation · Multi-label learning
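To make the pipeline described in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of the two steps it names: graph-based diffusion of label representations driven by a degree (diagonal) matrix built from pairwise Euclidean distances, and a simple self-attention fusion of label-specific feature blocks. The function names `diffuse_labels` and `attention_fuse`, the Gaussian bandwidth `sigma`, the mixing weight `alpha`, and all array shapes are illustrative assumptions, not notation from the paper.

```python
# Minimal sketch of graph-based label diffusion and attention fusion.
# All hyperparameters and shapes are illustrative assumptions.
import numpy as np

def diffuse_labels(X, Y, sigma=1.0, alpha=0.5):
    """Propagate label information along an instance-similarity graph.

    X : (n, d) instance features; Y : (n, q) binary label matrix.
    Returns a smoothed (n, q) latent label representation in which
    similar instances receive similar label sets.
    """
    # Pairwise squared Euclidean distances between instances.
    sq = np.sum(X ** 2, axis=1)
    dist2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    # Gaussian affinity matrix and its diagonal degree matrix.
    W = np.exp(-dist2 / (2.0 * sigma ** 2))
    D_inv = np.diag(1.0 / W.sum(axis=1))
    P = D_inv @ W                               # row-stochastic diffusion operator
    # One diffusion step, mixed with the original labels.
    return alpha * (P @ Y) + (1.0 - alpha) * Y

def attention_fuse(feature_blocks):
    """Fuse K label-specific feature blocks with softmax attention weights.

    feature_blocks : list of K arrays, each (n, d).
    Blocks that align more strongly with the mean query receive larger
    weights, so they contribute more to the joint representation.
    """
    F = np.stack(feature_blocks, axis=1)        # (n, K, d)
    query = F.mean(axis=1, keepdims=True)       # (n, 1, d)
    scores = (F * query).sum(axis=-1)           # (n, K) dot-product scores
    scores -= scores.max(axis=1, keepdims=True) # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)           # softmax over the K blocks
    return (w[:, :, None] * F).sum(axis=1)      # (n, d) fused representation

# Toy usage on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))
Y = (rng.random((8, 3)) > 0.5).astype(float)
Z = diffuse_labels(X, Y)
fused = attention_fuse([rng.normal(size=(8, 5)) for _ in range(3)])
```

In this sketch the row-stochastic operator `P` plays the role of the diffusion function: applying `P @ Y` averages each instance's label vector with those of its near neighbours, which is how similar instances end up with similar label sets.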

1 Introduction

Multi-label learning has become increasingly interesting because the multi-label paradigm can capture complex relations among objects. Deep learning has been incorporated into multi-label learning, which makes it widely applicable in many practical areas, such as recommendation systems, spam classification, and pattern recognition.

1 Department of Automation, College of Information Science and Engineering, China University of Petroleum (CUP), Beijing Campus, Changping District, Mailbox 260, Beijing 102249, China

Meanwhile, how to exploit the relations hidden behind multi-label data sets is also a key point. One of the fundamental issues in multi-label learning is how to learn the mapping between instances and label sets; the training samples are used to seek such a mapping, to predict the m