Block-wise weighted sparse representation-based classification



ORIGINAL PAPER

Ulises Rodríguez-Domínguez¹ ([email protected]) · Oscar Dalmau¹ ([email protected])

¹ Mathematics Research Center, Guanajuato, Guanajuato, Mexico

Received: 22 August 2019 / Revised: 27 February 2020 / Accepted: 23 April 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020

Abstract

We present a block-wise weighted sparse representation-based classification (BW-SRC) method, an extension of sparse representation-based classification (SRC) that is useful when the input features can be treated in a block-wise manner. We apply the method, which extracts features (coefficients) within a supervised scheme, to three well-known datasets. We show that, depending on the blocks of features used, our method outperforms linear dictionary learning and sparse coding methods, without itself learning a dictionary. As an additional advantage, our method automatically gives the input feature blocks an interpretation based on their importance, in terms of their ease of representation.

Keywords: Sparse representation · Weighted features · Classification · Robust feature extraction

1 Introduction

Dictionary learning is a technique that can be seen as a matrix factorization problem: it aims to represent a data matrix as a linear combination of an underlying basis. Both the basis and the coefficients of the linear combination are learned, under the hypothesis that the data of interest lie in the span of this basis. The vectors comprising the basis are called atoms. If, instead, only the coefficients, which are typically given sparsity constraints, are learned with respect to a fixed basis, the problem is known as sparse coding. Both schemes have a very versatile representational capacity, which is why they have been used in a wide variety of applications in signal processing and signal classification. Mirroring the distinction between signal processing and signal classification, dictionary learning can be divided into unsupervised and supervised schemes.

Zou [1] proposed to weight the sparse coefficients learned for an estimation problem in a Lasso scenario in order to give relevance to the response variables. Wright et al. [2] proposed the sparse representation-based classification (SRC) method, which does not learn a dictionary and imposes l1 regularization on the learned coefficients, i.e., sparse coding, together with a self-contained classifier; a minimal sketch follows.
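For concreteness, the following is a minimal sketch of the SRC idea, not the implementation of [2] or of this paper: it assumes scikit-learn's Lasso as the l1 solver, uses the column-normalized training samples themselves as the fixed dictionary, and the helper name src_predict and the value of alpha are illustrative choices.

```python
# Sketch of sparse representation-based classification (SRC), cf. [2].
# The dictionary is the matrix of training samples (no dictionary is
# learned); a test sample is assigned to the class whose training
# columns reconstruct it with the smallest residual.
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(X_train, y_train, x_test, alpha=0.01):
    # Sparse coding step: min_c ||x - Xc||_2^2 + alpha * ||c||_1
    # with the fixed dictionary X_train of shape (dim, n_samples).
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(X_train, x_test)
    c = coder.coef_
    # Classification step: zero out the coefficients of all but one
    # class at a time and keep the class with the smallest residual.
    best_label, best_res = None, np.inf
    for label in np.unique(y_train):
        c_label = np.where(y_train == label, c, 0.0)
        res = np.linalg.norm(x_test - X_train @ c_label)
        if res < best_res:
            best_label, best_res = label, res
    return best_label

# Toy usage with synthetic data (illustration only):
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 30))
X /= np.linalg.norm(X, axis=0)            # normalize dictionary columns
y = np.repeat([0, 1, 2], 10)              # 10 training columns per class
x = X[:, 4] + 0.05 * rng.normal(size=20)  # noisy copy of a class-0 sample
print(src_predict(X, y, x))               # should recover class 0
```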

Mairal et al. [3] proposed a scalable online dictionary learning algorithm based on stochastic approximations, which belongs to the unsupervised scheme (a brief illustration is given at the end of this section). Yang et al. [4] proposed a structured dictionary learning method in a supervised scheme based on the Fisher discrimination criterion. Also within the supervised scheme, Tuysuzoglu et al. [5] proposed to learn ensembles of dictionaries per class using feature subspaces. Zhang et al. [6] proposed to learn dictionaries by class in a high-dimensional space through the kernel trick within

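To illustrate the unsupervised scheme mentioned above, the sketch below uses scikit-learn's MiniBatchDictionaryLearning, whose online mini-batch updates are in the spirit of the algorithm of Mairal et al. [3]; the data and all hyperparameter values are placeholder choices, not settings from the paper.

```python
# Sketch of unsupervised (online) dictionary learning: both the atoms
# and the sparse coefficients are learned from the data.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))   # 500 signals of dimension 64

dl = MiniBatchDictionaryLearning(
    n_components=32,             # number of atoms in the learned basis
    alpha=1.0,                   # l1 weight -> sparse coefficients
    batch_size=16,               # mini-batches give online updates
    random_state=0,
)
codes = dl.fit_transform(X)      # sparse coefficients, shape (500, 32)
atoms = dl.components_           # learned dictionary, shape (32, 64)

# Each signal is approximated by a sparse combination of the atoms:
reconstruction = codes @ atoms
```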