MLK-SVD, the new approach in deep dictionary learning



ORIGINAL ARTICLE

Azadeh Montazeri¹ · Mahboubeh Shamsi² · Rouhollah Dianat¹

© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract
The aim of this study is to improve the classification efficiency of state-of-the-art methods using a multilayered dictionary learning framework. This paper presents "multilayered K-singular value decomposition (MLK-SVD)" dictionary learning, a new multilayer classification method. The method starts by building a sparse representation at the patch level and relies on a hierarchy of learned dictionaries to output a global sparse representation for the whole image. Using the class labels of the training data, label information is associated with each dictionary item (column of the dictionary matrix) to enforce discrimination in the sparse codes during the dictionary learning process. In addition, instead of learning a single shallow dictionary, the algorithm learns multiple levels of dictionaries. The proposed formulation of deep dictionary learning provides a basis for developing more efficient dictionary learning algorithms. It relies on a succession of sparse coding and pooling steps to find an efficient representation of the data for classification. The performance of the proposed method is evaluated on the MNIST and CIFAR-10 datasets, and the results show that the method can help advance the state of the art.

Keywords Multilayered K-singular value decomposition (MLK-SVD) · Sparse representation · Deep learning · Classification
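The sparse coding and dictionary update steps that the abstract refers to can be illustrated with a minimal sketch of a generic single-layer K-SVD loop (not the authors' MLK-SVD implementation; the data, dictionary sizes, and function names below are illustrative assumptions). Each iteration alternates greedy sparse coding with a rank-1 SVD refit of every dictionary atom:

```python
import numpy as np

def sparse_code(Y, D, k):
    """Greedy orthogonal matching pursuit: approximate each column
    of Y with at most k atoms of the dictionary D."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        r = Y[:, j].copy()
        support = []
        for _ in range(k):
            # Pick the atom most correlated with the current residual.
            idx = int(np.argmax(np.abs(D.T @ r)))
            support.append(idx)
            # Refit coefficients over the selected support.
            coeffs, *_ = np.linalg.lstsq(D[:, support], Y[:, j], rcond=None)
            r = Y[:, j] - D[:, support] @ coeffs
        X[support, j] = coeffs
    return X

def ksvd_step(Y, D, X):
    """One K-SVD dictionary update: for each atom, recompute the atom
    and its nonzero coefficients from a rank-1 SVD of the residual."""
    for a in range(D.shape[1]):
        users = np.nonzero(X[a, :])[0]      # signals that use atom a
        if users.size == 0:
            continue
        X[a, users] = 0.0
        E = Y[:, users] - D @ X[:, users]   # residual without atom a
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, a] = U[:, 0]                   # new unit-norm atom
        X[a, users] = s[0] * Vt[0, :]       # new coefficients
    return D, X

# Toy data: 40 signals of dimension 8, a 12-atom dictionary, sparsity 3.
rng = np.random.default_rng(0)
Y = rng.standard_normal((8, 40))
D = rng.standard_normal((8, 12))
D /= np.linalg.norm(D, axis=0)
for _ in range(5):
    X = sparse_code(Y, D, k=3)
    D, X = ksvd_step(Y, D, X)
err = np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)
```

In the multilayer setting described by the paper, the sparse codes produced at one level (after pooling) would serve as the input signals for learning the next level's dictionary.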

✉ Azadeh Montazeri, [email protected] · Mahboubeh Shamsi, [email protected] · Rouhollah Dianat, [email protected]

1 Department of Information Technology, Faculty of Computer, University of Qom, Qom, Iran

2 Department of Computer, Faculty of Computer and Electrical, Qom University of Technology, Qom, Iran

1 Introduction

Mathematical models for successful data description include several types of dictionary learning and sparse representation models, as well as deep learning models. Whereas dictionary learning focuses on learning bases and features through matrix factorization, deep learning extracts features by learning weights or filters in a greedy, layer-wise fashion. Dictionary learning has attracted considerable interest in the field of representation learning. The idea was applied to image processing [1] and information discovery [2] by researchers in the late 1990s. Work on "dictionary learning" and "matrix factorization," and on their applications, has continued into the recent past. The objective is to learn a dictionary from empirical data [3]: a data matrix is decomposed into a dictionary matrix and a coefficient matrix. Dictionary learning is currently very popular largely because of K-SVD [4], an algorithm that decomposes the training matrix into a compact basis and sparse coefficients. Although the concept of sparse decomposition was introduced before K-SVD, much of the work followed the advent of K-SVD in 2006. In both areas,