Stacked Auto-Encoders for Feature Extraction with Neural Networks
1 Department of Information Science, School of Mathematical Sciences and LMAM, Peking University, Beijing 100871, China
[email protected]
2 Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
Abstract. Auto-encoder plays an important role in feature extraction for deep learning architectures. In this paper, we present several variants of stacked auto-encoders for feature extraction with neural networks. In fact, these stacked auto-encoders can serve as certain biologically plausible filters that extract effective features as the input to a particular neural network with a learning task. The experimental results on real datasets demonstrate that convolutional auto-encoders can help a supervised neural network achieve the best classification or recognition performance.

Keywords: Auto-encoder · Feature extraction · Deep learning · Neural networks · Classification
1 Introduction
Feature extraction, or feature learning, especially in a deep learning architecture, has long played an important role in pattern recognition and machine learning. The aim of unsupervised feature learning is to detect and remove input redundancies, to extract generally useful features from unlabelled data, and to preserve only the essential aspects of the data in robust and discriminative representations [1]. In neural network architectures, unsupervised layers can be stacked on top of each other to build deep hierarchies [2]: the input is fed to the first layer, and each layer's activations feed the next. Deep architectures can be trained in an unsupervised layer-wise fashion and then fine-tuned by back-propagation to become classifiers [3]. Most methods are based on the encoder-decoder paradigm [4], since unsupervised initialization of this kind helps to avoid local minima and increases the stability of the network's performance [5]. The input is first transformed into a typically lower-dimensional space (the encoder) and then expanded to reproduce the initial data (the decoder). Once a layer is trained, its code is fed to the next layer, so as to better model highly non-linear dependencies in the input. The focus of this approach is to build high-level, class-specific feature detectors.
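To make the encoder-decoder paradigm concrete, the following is a minimal sketch of a single auto-encoder layer in PyTorch. It is illustrative only and not taken from the paper: the layer sizes, activation choices, and training hyper-parameters are assumptions, and the random batch stands in for real data.

```python
# Minimal auto-encoder sketch of the encoder-decoder paradigm described above.
# All sizes and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_input=784, n_code=64):
        super().__init__()
        # Encoder: map the input to a (typically lower-dimensional) code.
        self.encoder = nn.Sequential(nn.Linear(n_input, n_code), nn.Sigmoid())
        # Decoder: expand the code to reproduce the initial data.
        self.decoder = nn.Sequential(nn.Linear(n_code, n_input), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Unsupervised training step: minimise reconstruction error, no labels needed.
model = AutoEncoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)            # dummy mini-batch standing in for real data
optimiser.zero_grad()
reconstruction, _ = model(x)
loss = nn.functional.mse_loss(reconstruction, x)
loss.backward()
optimiser.step()
```

Once such a layer is trained, its code (the encoder output) would serve as the input on which the next layer's auto-encoder is trained.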
Much recent work in machine learning has focused on auto-encoders, which have taken center stage again in the deep architecture approach [5-10]. Auto-encoders are simple learning circuits that aim to transform inputs into outputs with the least possible distortion; they are stacked and trained bottom-up in an unsupervised fashion, followed by a supervised learning phase to train the top layer and fine-tune the entire architecture [11]. On a great many challenging classification and regression problems, these deep architectures have been shown to produce state-of-the-art results. In this paper, we present several possible variants of stacked auto-encoders for feature extraction with neural networks.
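The stack-then-fine-tune recipe just described can be sketched as follows. This is a hedged, self-contained illustration, not the paper's method: the layer widths, number of classes, training loop length, and dummy data are all assumptions chosen for brevity.

```python
# Sketch of greedy layer-wise unsupervised pre-training followed by
# supervised fine-tuning, as outlined above. All sizes are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
layer_sizes = [784, 256, 64]                  # assumed layer widths
encoders = []
data = torch.rand(256, layer_sizes[0])        # dummy unlabelled batch

# 1) Unsupervised, bottom-up: train one auto-encoder per adjacent layer
#    pair, then pass its code on as the next layer's training data.
for n_in, n_code in zip(layer_sizes[:-1], layer_sizes[1:]):
    enc = nn.Sequential(nn.Linear(n_in, n_code), nn.Sigmoid())
    dec = nn.Sequential(nn.Linear(n_code, n_in), nn.Sigmoid())
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                           lr=1e-3)
    for _ in range(200):                      # short loop stands in for full training
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(data)), data)
        loss.backward()
        opt.step()
    encoders.append(enc)
    data = enc(data).detach()                 # codes feed the next layer

# 2) Supervised: stack the trained encoders, add a top classifier layer,
#    and fine-tune the entire architecture by back-propagation.
model = nn.Sequential(*encoders, nn.Linear(layer_sizes[-1], 10))
x = torch.rand(256, layer_sizes[0])           # dummy labelled batch
labels = torch.randint(0, 10, (256,))
fine_tune_loss = nn.functional.cross_entropy(model(x), labels)
fine_tune_loss.backward()                     # gradients flow through every layer
```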