Dimensionality Reduction and Sparse Representation
Nature does not hurry, yet everything is accomplished. —Lao Tzu
Image representation is a fundamental issue in signal processing, pattern recognition, and computer vision. An efficient image representation can lead to effective algorithms for the interpretation of images. Since Marr proposed the fundamental principle of the primal sketch of a scene [28], many image representations have been developed based on this concept. The primal sketch refers to the edges, lines, regions, and other primitives in an image, also called parts of objects. These are characteristic features that can be extracted by transforming an image from the pixel level to a higher-level representation for image understanding; this step is usually considered a low-level transformation. Many transformation techniques have been developed in the literature for signal and image representation. These techniques can transform an image into an efficient representation of its characteristic features (the so-called intrinsic dimension). Dimensionality reduction (DR) and sparse representation (SR) are two representative schemes frequently used in such transforms to reduce the dimension of a dataset. These transformations include principal component analysis (PCA), singular value decomposition (SVD), non-negative matrix factorization (NMF), and sparse coding (SC). Studies of the mammalian brain suggest that only a small number of active neurons encode sensory information at any given time [26, 30]. This finding has led to the rapid development of sparse coding, in which a feature vector has only a small number of nonzero entries; because most of its entries are zero or nearly zero, the vector is said to be sparse. Hence, it is important to exploit the sparsity of a dataset by eliminating data redundancy in applications [6, 34]. The ultimate goal is a compact, efficient, and compressed representation of the input data.
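As a minimal illustration of the sparsity idea described above (not an algorithm from this book), the following sketch measures how sparse a feature vector is by counting its nonzero entries; the vector values are arbitrary examples.

```python
import numpy as np

# A hypothetical feature vector: only 2 of its 8 entries are nonzero.
x = np.array([0.0, 3.0, 0.0, 0.0, -1.5, 0.0, 0.0, 0.0])

# The L0 "norm" counts nonzero entries; a vector is sparse when
# this count is small relative to its length.
l0 = np.count_nonzero(x)
sparsity = 1.0 - l0 / x.size

print(l0)        # 2
print(sparsity)  # 0.75
```

Sparse coding seeks a dictionary in which typical signals admit representations like this one, with most coefficients exactly or nearly zero.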
© Springer Nature Switzerland AG 2019 C.-C. Hung et al., Image Texture Analysis, https://doi.org/10.1007/978-3-030-13773-1_4
In the following sections, we will introduce the Hughes effect in image classification; dimensionality reduction is frequently used to address this effect. We will then present the basis vector concept from linear algebra. Based on this concept, principal component analysis (PCA), singular value decomposition (SVD), non-negative matrix factorization (NMF), and sparse coding (SC) will be introduced. PCA is one of the earliest dimensionality reduction methods and is widely used in pattern recognition and remote sensing image interpretation. Please note that since a basis image can be represented as a basis vector, we use the two terms interchangeably in the following discussions.
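To preview the connection between two of the methods named above, the following sketch (with synthetic, randomly generated data, not data from this book) computes a PCA-style reduction via the SVD: after centering the data, the leading right singular vectors serve as principal directions onto which the samples are projected.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data matrix: 100 samples, each a 10-dimensional feature vector.
X = rng.normal(size=(100, 10))

# Center the data, then take the SVD; the rows of Vt are the
# principal directions, ordered by decreasing singular value.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3                # keep the 3 leading components
Z = Xc @ Vt[:k].T    # reduced representation: 100 samples x 3 features

print(Z.shape)  # (100, 3)
```

Because the singular values come out in descending order, truncating to the first k components retains the directions of greatest variance, which is the sense in which PCA reduces dimensionality.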
4.1 The Hughes Effect and Dimensionality Reduction (DR)
In a high-di