Multiplication fusion of sparse and collaborative-competitive representation for image classification



ORIGINAL ARTICLE

Zi-Qi Li · Jun Sun · Xiao-Jun Wu · He-Feng Yin

Received: 18 June 2019 / Accepted: 6 April 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract  Representation based classification methods have become a hot research topic during the past few years, and the two most prominent approaches are sparse representation based classification (SRC) and collaborative representation based classification (CRC). CRC reveals that it is the collaborative representation rather than the sparsity that makes SRC successful. Nevertheless, the dense representation of CRC may not be discriminative, which degrades its performance on classification tasks. To alleviate this problem, we propose a new method called sparse and collaborative-competitive representation based classification (SCCRC) for image classification. First, the coefficients of the test sample are obtained by SRC and CCRC, respectively. Then the fused coefficient is derived by multiplying the coefficients of SRC and CCRC. Finally, the test sample is assigned to the class with the minimum residual. Experimental results on several benchmark databases demonstrate the efficacy of the proposed SCCRC. The source code of SCCRC is available at https://github.com/li-zi-qi/SCCRC.

Keywords  Representation based classification methods · Sparse representation · Collaborative representation · Collaborative-competitive representation based classification
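The three-step pipeline in the abstract (per-model coding, multiplication fusion, minimum-residual assignment) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the SRC ℓ1 problem is solved here with a plain ISTA loop, and the CCRC step is approximated by the CRC-style ridge closed form (the competition term of CCRC is omitted). All function names and parameter values are ours.

```python
import numpy as np

def src_coefficients(X, y, lam=0.01, n_iter=200):
    """ISTA sketch for the SRC problem: min_a ||y - X a||^2 + lam * ||a||_1."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(X.shape[1])
    for _ in range(n_iter):
        a = a - X.T @ (X @ a - y) / L      # gradient step on the quadratic term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def crc_coefficients(X, y, lam=0.01):
    """Ridge closed form, used here as a CRC-style stand-in for the CCRC coder."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def sccrc_classify(X, labels, y):
    """Fuse the two coefficient vectors by elementwise multiplication,
    then assign y to the class with the smallest reconstruction residual."""
    a = src_coefficients(X, y) * crc_coefficients(X, y)   # multiplication fusion
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - X[:, mask] @ a[mask])
    return min(residuals, key=residuals.get)
```

In this sketch `X` stacks all (column-normalized) training samples as columns, exactly as in SRC's "training data as dictionary" setup; only the per-class sub-dictionary `X[:, mask]` enters each residual, so a fused coefficient that concentrates on the correct class yields the smallest residual.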

1 Introduction

Representation based classification methods (RBCM) have gained increasing attention in various research fields, e.g. character recognition [1], person re-identification [2] and hyperspectral image classification [3]. SRC [4] is a pioneering work of RBCM: it directly uses all the training data as the dictionary to represent the test sample, and classifies the test sample by checking which class leads to the minimal reconstruction error. SRC solves an ℓ1-norm optimization problem, so when the dictionary is large, the sparse decomposition process can be very slow. One way to speed up sparse coding is to reduce the size of the dictionary by selecting representative training samples. Li et al. [5] proposed a local sparse representation



* Corresponding author: Jun Sun ([email protected])

1 School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
2 Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, China

based classification (LSRC) scheme, which performs sparse decomposition in a local neighborhood. Similarly, Zhang et al. [6] presented KNN-SRC, which chooses the K nearest neighbors of a test sample from among all the training samples to represent it. Ortiz et al. [7] developed a linearly approximated sparse representation-based classification (LASRC) algorithm that employs linear regression to perform sample selecti