Non-negative Matrix Semi-tensor Factorization for Image Feature Extraction and Clustering
Abstract Non-negative Matrix Factorization (NMF) is frequently applied to image feature extraction and clustering. In image clustering tasks in particular, it achieves similar or better performance than most matrix factorization algorithms because its parts-based representations mirror how objects are thought to be represented in the brain. However, the features extracted by NMF are not sufficiently sparse and localized, and the factorization error is not small enough. The semi-tensor product of matrices (STP) is a novel matrix multiplication that generalizes the conventional matrix product by allowing the dimensions of the factor matrices to be unequal. STP can manage data hierarchically, and its inverse process can separate data hierarchically. Based on this property, we propose Non-negative Matrix Semi-tensor Factorization (NMSTF), which uses the inverse process of the semi-tensor product of matrices for non-negative matrix factorization. NMSTF effectively addresses the two problems of NMF noted above: while achieving similar or even better performance on image clustering tasks, the features extracted by NMSTF are at least 50% smaller than those extracted by NMF, and the factorization error is reduced by 30% on average.

Keywords Non-negative Matrix Semi-tensor Factorization ⋅ Non-negative Matrix Factorization ⋅ Semi-tensor Product of matrices ⋅ Image feature extraction ⋅ Clustering
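As a concrete illustration of the semi-tensor product described in the abstract, the sketch below implements the standard left STP via Kronecker products (the function name `stp` is ours, not from the paper): for A of size m × n and B of size p × q, let t = lcm(n, p); then A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), which reduces to the ordinary matrix product when n = p.

```python
import numpy as np
from math import gcd

def stp(A, B):
    """Left semi-tensor product A |x| B.

    For A (m x n) and B (p x q), take t = lcm(n, p) and compute
    (A kron I_{t/n}) @ (B kron I_{t/p}).  When n == p this is the
    ordinary matrix product, so STP generalizes it to factor
    matrices with unequal dimensions.
    """
    n, p = A.shape[1], B.shape[0]
    t = n * p // gcd(n, p)  # least common multiple of n and p
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# When the inner dimensions match, STP coincides with A @ B.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(stp(A, B), A @ B))

# With unequal dimensions the product is still well defined:
x = np.array([[1.0, 2.0]])       # 1 x 2
C = np.array([[1.0, 2.0, 3.0]])  # 1 x 3
print(stp(x, C).shape)           # a 1 x 6 result
```

The hierarchical behavior the abstract mentions comes from the Kronecker blocks: each entry of the shorter-dimensioned factor acts on a whole block of the other, rather than on a single entry.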
C. Ben ⋅ Z. Wang (✉) ⋅ X. Yang ⋅ X. Yi
State Key Laboratory of High Performance Computing (HPCL), College of Computer, National University of Defense Technology, 137 Yanwachi Street, Changsha 410073, Hunan, People's Republic of China
e-mail: [email protected]
© Springer Science+Business Media Singapore 2016
Y. Jia et al. (eds.), Proceedings of 2016 Chinese Intelligent Systems Conference, Lecture Notes in Electrical Engineering 404, DOI 10.1007/978-981-10-2338-5_37

1 Introduction

With the rapid development of computer science, robotics, and artificial intelligence in recent years, computer vision faces higher demands than before. Input data typically have high dimension, which makes learning from examples
infeasible [1]. Matrix factorization techniques can effectively mitigate this high-dimensionality problem. Common techniques include Triangular Factorization, QR Factorization, Vector Quantization, and Singular Value Decomposition (SVD). They aim to find two or more lower-dimensional matrices whose product closely approximates the original one. Among these algorithms, SVD has been applied in various settings such as face recognition [2]. However, because negative values lack practical meaning in images, the features extracted by these algorithms are difficult to interpret. Previous studies have provided psychological and physiological evidence for parts-based representations in the brain [3, 4], and certain computational theories of object clustering rely on such representations. Non-negative Matrix Factorization (NMF) produces such parts-based representations by constraining all factors to be non-negative.
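To make the non-negativity contrast concrete, here is a minimal sketch of classic NMF using Lee and Seung's multiplicative update rules (this is the baseline algorithm the paper builds on, not the NMSTF variant it proposes; the function name `nmf` and the hyperparameters are ours):

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Factor a non-negative V (m x n) into W (m x r) @ H (r x n)
    with multiplicative updates, which keep every entry of W and H
    non-negative at each step (unlike SVD factors)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates minimize ||V - WH||_F without
        # ever producing negative entries.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((20, 15))  # toy non-negative data
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(W.min() >= 0 and H.min() >= 0)  # factors stay non-negative
```

Because the updates only multiply by non-negative ratios, the learned basis columns of W can be read directly as additive image parts, which is the interpretability advantage the paragraph above describes.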