Ordered smooth representation clustering
ORIGINAL ARTICLE
Ordered smooth representation clustering

Liping Chen¹,² · Gongde Guo¹,²

Received: 31 January 2019 / Accepted: 15 September 2019
© Springer-Verlag GmbH Germany, part of Springer Nature 2019
Abstract

The smooth representation (SMR) model is a widely used segmentation method in computer vision. It adopts the K-nearest-neighbour (KNN) graph to select samples for representation, and all neighbours in the KNN graph are assumed to be equally important candidates. In this paper, we use weights computed by a novel cross-view kernel function to evaluate the contribution of each neighbour to subspace clustering in SMR. The neighbours found by the Gaussian similarity formula can be considered long-range similar neighbours. We add a further term to accurately reflect the order relation in the cross-view kernel function; this addition allows the kernel function to generalize the conventional SMR method to sequential data. The resulting ordered smooth representation (OSMR) model outperforms other representative subspace clustering methods on public datasets, namely the UCI database, the USPS database, the Yale B datasets, the Freiburg–Berkeley Motion Segmentation database, and a real-world mobile video captured by a smartphone.

Keywords Subspace clustering · Smooth representation clustering · Kernel function · KNN graph
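For context, the Gaussian similarity weighting of a KNN graph referred to above has, in its standard form, the following shape (a sketch only; σ denotes a bandwidth parameter and 𝒩ₖ(xᵢ) the k nearest neighbours of xᵢ — the cross-view kernel proposed in this paper modifies this baseline):

```latex
w_{ij} =
\begin{cases}
\exp\!\left(-\dfrac{\lVert x_i - x_j \rVert_2^2}{2\sigma^2}\right), & x_j \in \mathcal{N}_k(x_i),\\[6pt]
0, & \text{otherwise.}
\end{cases}
```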
* Gongde Guo
  [email protected]
  Liping Chen
  [email protected]
1 School of Mathematics and Informatics, Fujian Normal University, Fuzhou, China
2 Digital Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fuzhou, China

1 Introduction

With the advance of the new generation of artificial intelligence, the computer vision field faces several fundamental challenges, such as handling ultra-high-dimensional data and vast numbers of samples, a lack of prior knowledge of the intrinsic low-dimensional structure of the data, and the susceptibility of video data to noise distortion. At the same time, with the growth of smartphones and computer networks, high-dimensional data such as short videos and image sequences have become increasingly common in daily life, and substantial attention has been paid to clustering such data automatically. These data often lie in an intrinsic low-dimensional subspace, contaminated by noise and outliers. Subspace clustering is a promising line of research that aims to segment data into a union of subspaces [1] in order to uncover their underlying subspace structures; each cluster belongs to exactly one subspace. The most widely used subspace clustering methods, including sparse subspace clustering (SSC) [2], low-rank representation (LRR) [3–6] and smooth representation clustering (SMR) [7], are derived from spectral clustering. When the subspaces are independent, these methods all learn an affinity matrix from the data: SSC minimizes the ℓ1-norm of the self-representation coefficients, LRR minimizes the nuclear norm to emphasize the low-rank property of the affinity matrix, and SMR uses a smoothness prior induced by the Laplacian of a sparse KNN graph.
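As a sketch of these three objectives in their standard formulations (symbols here are assumptions for illustration: X is the data matrix with samples as columns, Z the self-representation coefficient matrix, E a noise term, L the Laplacian of the KNN graph, and α, λ > 0 trade-off parameters; the exact constraint and noise handling varies across the cited papers):

```latex
\begin{align*}
\text{SSC:}\quad & \min_{Z}\ \lVert Z \rVert_{1}
    \quad \text{s.t.}\quad X = XZ,\ \operatorname{diag}(Z) = 0,\\
\text{LRR:}\quad & \min_{Z,E}\ \lVert Z \rVert_{*} + \lambda \lVert E \rVert_{2,1}
    \quad \text{s.t.}\quad X = XZ + E,\\
\text{SMR:}\quad & \min_{Z}\ \lVert X - XZ \rVert_{F}^{2}
    + \alpha\, \operatorname{tr}\!\left( Z L Z^{\top} \right).
\end{align*}
```

In each case the learned Z (symmetrized, e.g. as |Z| + |Zᵀ|) serves as the affinity matrix fed to spectral clustering.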