\(\ell^0\)-Sparse Subspace Clustering

1 Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, USA ({yyang58,t-huang1}@illinois.edu)
2 Department of ECE, National University of Singapore, Singapore, Singapore ([email protected])
3 Microsoft Research, Redmond, USA ([email protected])
4 Snapchat, Los Angeles, USA ([email protected])

Abstract. Subspace clustering methods with a sparsity prior, such as Sparse Subspace Clustering (SSC) [1], are effective in partitioning data that lie in a union of subspaces. Most of those methods require certain assumptions on the subspaces, e.g. independence or disjointness. These assumptions are not guaranteed to hold in practice, and they limit the applicability of existing sparse subspace clustering methods. In this paper, we propose \(\ell^0\)-induced sparse subspace clustering (\(\ell^0\)-SSC). In contrast to the independence or disjointness assumptions required by most existing sparse subspace clustering methods, we prove that subspace-sparse representation, a key element in subspace clustering, can be obtained by \(\ell^0\)-SSC for arbitrary distinct underlying subspaces almost surely under a mild i.i.d. assumption on the data generation. We also present a "no free lunch" theorem: obtaining the subspace-sparse representation under our general assumptions cannot be much computationally cheaper than solving the corresponding \(\ell^0\) problem of \(\ell^0\)-SSC. We develop a novel approximate algorithm named Approximate \(\ell^0\)-SSC (A\(\ell^0\)-SSC) that employs proximal gradient descent to obtain a sub-optimal solution to the optimization problem of \(\ell^0\)-SSC with a theoretical guarantee; this sub-optimal solution is used to build a sparse similarity matrix for clustering. Extensive experimental results on various data sets demonstrate the superiority of A\(\ell^0\)-SSC compared to other competing clustering methods.

This material is based upon work supported by the National Science Foundation under Grant No. 1318971. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. The work of Jiashi Feng was partially supported by National University of Singapore startup grant R-263-000-C08-133 and Ministry of Education of Singapore AcRF Tier One grant R-263-000-C21-112.

Electronic supplementary material: The online version of this chapter (doi:10.1007/978-3-319-46475-6_45) contains supplementary material, which is available to authorized users.

© Springer International Publishing AG 2016. B. Leibe et al. (Eds.): ECCV 2016, Part II, LNCS 9906, pp. 731–747, 2016. DOI: 10.1007/978-3-319-46475-6_45
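For concreteness, the \(\ell^0\) problem referenced in the abstract is typically stated as an \(\ell^0\) self-expressiveness program. The following is a sketch consistent with the abstract (the paper's exact equations may differ in details): for a data matrix \(X \in \mathbb{R}^{d \times n}\) whose columns are the data points,

\[
\min_{Z \in \mathbb{R}^{n \times n}} \|Z\|_0 \quad \text{s.t.} \quad X = XZ, \;\; \operatorname{diag}(Z) = 0,
\]

where \(\|Z\|_0\) counts the nonzero entries of \(Z\), and \(\operatorname{diag}(Z) = 0\) excludes the trivial solution in which each point represents itself. A practical unconstrained surrogate is \(\min_{Z} \frac{1}{2}\|X - XZ\|_F^2 + \lambda\|Z\|_0\), which is amenable to proximal gradient descent because the proximal map of the \(\ell^0\) penalty is entrywise hard thresholding.

The Python sketch below illustrates this pipeline under the surrogate objective above; the step size, the regularization weight lam, the iteration count, and the function names are illustrative assumptions, not the paper's A\(\ell^0\)-SSC algorithm or its tuned settings.

    import numpy as np
    from scipy.sparse.csgraph import laplacian
    from sklearn.cluster import KMeans

    def ell0_self_representation(X, lam=0.1, n_iters=200):
        """Proximal gradient descent on 0.5*||X - XZ||_F^2 + lam*||Z||_0
        with diag(Z) = 0. X is d x n (columns are points); returns n x n Z."""
        d, n = X.shape
        Z = np.zeros((n, n))
        # Lipschitz constant of the smooth term's gradient is ||X^T X||_2.
        L = np.linalg.norm(X.T @ X, 2)
        step = 1.0 / max(L, 1e-12)
        for _ in range(n_iters):
            grad = X.T @ (X @ Z - X)        # gradient of 0.5*||X - XZ||_F^2
            V = Z - step * grad             # gradient step
            # Proximal map of step*lam*||.||_0: keep v iff v^2 > 2*step*lam.
            Z = np.where(V ** 2 > 2.0 * step * lam, V, 0.0)
            np.fill_diagonal(Z, 0.0)        # enforce diag(Z) = 0
        return Z

    def sparse_subspace_cluster(X, n_clusters, lam=0.1):
        """Build a sparse similarity matrix from Z and spectrally cluster it."""
        Z = ell0_self_representation(X, lam=lam)
        W = 0.5 * (np.abs(Z) + np.abs(Z).T)  # symmetric similarity matrix
        Lap = laplacian(W, normed=True)      # normalized graph Laplacian
        eigvals, eigvecs = np.linalg.eigh(Lap)
        U = eigvecs[:, :n_clusters]          # bottom eigenvectors
        U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)

As a usage example, labels = sparse_subspace_cluster(X, n_clusters=5) on a d x n matrix X returns one cluster index per column of X.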

Keywords: Sparse subspace clustering · Proximal gradient descent

1 Introduction

In many practical scenarios, high-dimensional data lie in a union of low-dimensional subspaces. Based on this observation, subspace clustering algorithms [2] aim to partition the data such that points belonging to the same subspace are grouped into one cluster. Among various subspace clustering algorithms, the