Rigidly Self-Expressive Sparse Subspace Clustering




1 College of Computer, National University of Defense Technology, Changsha 410073, China
2 National Laboratory for Parallel and Distributed Processing, National University of Defense Technology, Changsha 410073, China
{qiao.linbo,bfzhang,yipinsun,sjs}@nudt.edu.cn

Abstract. Sparse subspace clustering is a well-known algorithm that is widely used in many research fields, and considerable effort has been devoted to improving it. In this paper, we propose a novel approach to obtain the coefficient matrix. Compared with traditional sparse subspace clustering (SSC) approaches, the key advantage of our approach is that it provides a new perspective on the self-expressive property, which we call the rigidly self-expressive (RSE) property. This new formulation captures the rigidly self-expressive property of data points lying in the same subspace, and provides a new formulation for sparse subspace clustering; extensions to traditional SSC can also be incorporated into it. We present a first-order algorithm to solve the resulting nonconvex optimization problem, and further prove that it converges to a KKT point of the nonconvex problem under certain standard assumptions. Extensive experiments on the Extended Yale B dataset, the USPS digit images dataset, and the Columbia Object Image Library show that, for images with up to 30% missing pixels, the clustering quality achieved by our approach outperforms that of the original SSC.

Keywords: Sparse subspace clustering · Rigidly self-expressive · Optimization method

1 Introduction

Subspace clustering naturally arises with the emergence of high-dimensional data. It refers to the problem of finding multiple low-dimensional subspaces underlying a collection of data points sampled from a high-dimensional space, and simultaneously partitioning these data points among the subspaces. Exploiting the low intrinsic dimension of the input data and the assumption that the data lie in a union of linear or affine subspaces, subspace clustering assigns each data point to a subspace, so that data points residing in the same subspace belong to the same cluster. Subspace clustering has been applied to various areas such as computer vision [11], signal processing [16], and bioinformatics [13]. In the past two decades, numerous algorithms for subspace clustering have been proposed, including K-plane [3], GPCA [24], Spectral Curvature Clustering [4], Low Rank Representation (LRR) [15], and Sparse Subspace Clustering (SSC) [7]. Among these algorithms, the recent work on Sparse Subspace Clustering has been recognized to enjoy promising empirical performance. In this paper, we propose a novel approach beyond SSC to obtain the coefficient matrix. Compared with the approaches mentioned above, the key advantage of our approach is that it provides a new perspective on the self-expressive property.

L. Qiao—The work was partially supported by the National Natural Science Foundation of China under Grant No. 61303264 and Grant No. 61202482.
© Springer International Publishing Switzerland 2016. H. Cao et al. (Eds.): PAKDD 2016 Workshops, LNAI 9794, pp. 101–114, 2016. DOI: 10.1007/978-3-319-42996-0_9
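For context, the standard SSC pipeline that this work builds on can be sketched as follows: each data point is expressed as a sparse linear combination of the other points (the self-expressive property), and spectral clustering is then applied to the resulting affinity matrix. This is a minimal sketch using scikit-learn, not the RSE formulation proposed in this paper; the `alpha` sparsity weight and the per-column Lasso solver are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc_sketch(X, n_clusters, alpha=0.01):
    """Basic SSC sketch: X is a d x n matrix whose columns are data points.

    Each column is regressed on all the other columns with an l1 penalty,
    enforcing c_ii = 0 by excluding column i from its own dictionary.
    Returns (labels, C) where C is the sparse coefficient matrix.
    """
    X = X / np.linalg.norm(X, axis=0, keepdims=True)  # unit-norm columns
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]          # leave point i out
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        lasso.fit(X[:, idx], X[:, i])
        C[idx, i] = lasso.coef_
    W = np.abs(C) + np.abs(C).T                        # symmetric affinity
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity='precomputed',
                                random_state=0).fit_predict(W)
    return labels, C
```

On noiseless data drawn from disjoint subspaces, the learned affinity is close to block-diagonal, so the spectral step recovers the subspaces; the RSE approach in this paper replaces the self-expression step above with a different (nonconvex) formulation.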