Foundation of Deep Machine Learning in Neural Networks
Our greatest glory is not in never falling, but in rising every time we fall. —Confucius
This chapter introduces several basic neural network models that serve as the foundation for the further development of deep machine learning in neural networks. Deep machine learning takes a very different approach to feature extraction compared with the traditional feature extraction methods that have been widely used in pattern recognition. Rather than relying on human knowledge to design and build feature extractors, deep machine learning in neural networks automatically "learns" the feature extractors. We will describe some typical neural network models that have been successfully used in image and video analysis. One type of neural network introduced here uses supervised learning, such as the feed-forward multi-layer neural networks; the other uses unsupervised learning, such as the Kohonen model (also called the self-organizing map (SOM)). Both types were widely used in visual recognition before the maturation of deep machine learning in convolutional neural networks (CNNs). Specifically, the following models will be introduced: (1) the basic neuron model and the perceptron, (2) the traditional feed-forward multi-layer neural networks trained with backpropagation, (3) Hopfield neural networks, (4) Boltzmann machines, (5) restricted Boltzmann machines and deep belief networks, (6) self-organizing maps, and (7) the Cognitron and Neocognitron. Both the Cognitron and the Neocognitron are deep neural networks that can self-organize without any supervision. These models are the foundation for the later discussion of texture classification using deep neural network models.
© Springer Nature Switzerland AG 2019 C.-C. Hung et al., Image Texture Analysis, https://doi.org/10.1007/978-3-030-13773-1_9
9.1 Neuron and Perceptron
Traditional artificial neural networks (ANNs) have become an essential part of machine learning in artificial intelligence. An ANN is characterized by three components, namely: the architecture, the transfer function (also called the squashing or activation function), and the learning algorithm. Many different types of ANNs have been proposed and developed in the literature. Two types are widely used in applications: the supervised ANN, such as the feed-forward multi-layer neural network (FMNN), and the unsupervised ANN, such as the self-organizing map (SOM). Hence, the SOM is often used as an unsupervised classifier, while the FMNN is employed as a supervised classification algorithm. By analogy, this corresponds to unsupervised and supervised learning in pattern recognition and machine learning. In general, training an ANN is time consuming, but once trained, an ANN acts very fast during the testing phase.
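The three components above (architecture, transfer function, learning algorithm) can be illustrated with the simplest case: a single perceptron with a step transfer function trained by the classic perceptron learning rule. The sketch below is not taken from this book; it is a minimal illustrative implementation, with the function names, learning rate, and the logical-AND toy problem chosen here for demonstration only.

```python
import numpy as np

def step(x):
    # Step (threshold) transfer function: output 1 if net input >= 0, else 0.
    return int(x >= 0)

def train_perceptron(X, y, lr=0.1, epochs=50):
    """Perceptron learning rule: w <- w + lr * (target - output) * x,
    applied sample by sample until the weights stabilize."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])  # architecture: one neuron, one weight per input
    b = 0.0                                     # bias term
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            out = step(w @ xi + b)   # forward pass through the transfer function
            err = ti - out           # learning algorithm: error-driven update
            w += lr * err * xi
            b += lr * err
    return w, b

# Toy linearly separable problem: logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [step(w @ xi + b) for xi in X]
print(preds)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the rule finds a separating weight vector in a finite number of updates; the multi-layer networks and backpropagation discussed later remove this linear-separability limitation.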