Radial Basis Function Networks
6.1 Introduction
Radial Basis Function networks, commonly known as RBF networks, can also be employed in almost every kind of problem solved by MLPs, including those involving curve fitting and pattern classification. Unlike MLP networks, which can be composed of several intermediate layers, the typical RBF structure has only one intermediate layer, in which the activation function is Gaussian, as illustrated in Fig. 6.1. One of the main particularities of RBF networks is the training strategy used for adjusting the weights of both of their neural layers, which will be presented in detail in the next section. As shown in Fig. 6.1, another distinguishing feature of this architecture is the activation function used by the neurons of the intermediate layer, which is always a radial basis function, such as the Gaussian. According to the classification presented in Chap. 2, RBF networks also belong to the multiple-layer feedforward architecture, and their training is supervised. From Fig. 6.1, it is possible to verify that the information flowing within the structure begins at the input layer, propagates to the intermediate layer (neurons with Gaussian activation functions), and ends at the output layer (neurons with linear activation functions).
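To make this information flow concrete, the following minimal NumPy sketch implements such a forward pass: a single Gaussian intermediate layer followed by a linear output layer. The dimensions, centers, and weights below are arbitrary placeholders for illustration, not values taken from the chapter.

```python
import numpy as np

def rbf_forward(x, centers, sigma, W, b):
    """Forward pass of an RBF network: Gaussian hidden layer, linear output.

    x       : (n_features,) input vector
    centers : (n_hidden, n_features) Gaussian centers
    sigma   : scalar width (std. deviation) shared by all Gaussians
    W       : (n_outputs, n_hidden) output-layer weights
    b       : (n_outputs,) output-layer biases
    """
    # Intermediate layer: Gaussian of the distance between x and each center
    dists_sq = np.sum((centers - x) ** 2, axis=1)
    hidden = np.exp(-dists_sq / (2.0 * sigma ** 2))
    # Output layer: plain linear combination of the hidden activations
    return W @ hidden + b

# Toy usage with arbitrary (hypothetical) dimensions
rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 2))   # 5 hidden neurons, 2 inputs
W = rng.normal(size=(1, 5))         # 1 linear output neuron
b = np.zeros(1)
print(rbf_forward(np.array([0.3, -0.7]), centers, sigma=1.0, W=W, b=b))
```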
Fig. 6.1 Typical configuration of an RBF network

6.2 Training Process of the RBF Network

The working principle of RBF networks is similar to that of MLP networks: each input {xi}, representing the signals from the application, is propagated through the intermediate layer toward the output layer. However, unlike the MLP, the RBF training strategy is composed of two very distinct stages. The first stage, associated with adjusting the weights of the neurons in the intermediate layer, adopts a self-organized (unsupervised) learning method, which depends only on the features of the input data. This adjustment is directly related to the allocation of the radial basis functions. The second stage, related to the weight adjustment of the neurons in the output layer, uses a learning criterion similar to that employed in the last layer of the MLP, that is, the generalized delta rule. Moreover, contrary to MLP networks, the training process begins with the neurons of the intermediate layer and ends with the neurons of the output layer.
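A minimal sketch of this two-stage strategy is given below, assuming Stage I allocates the Gaussian centers with plain k-means clustering of the inputs (one common self-organized choice; the chapter does not fix a specific method at this point) and Stage II adjusts the linear output weights with the delta rule (LMS, since the output neurons are linear). All function names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def kmeans_centers(X, n_centers, n_iters=50, seed=0):
    """Stage I (unsupervised): place the Gaussian centers by k-means on the inputs."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    for _ in range(n_iters):
        # Assign each sample to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        # Move each center to the mean of its assigned samples
        for k in range(n_centers):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers

def train_output_layer(X, y, centers, sigma, lr=0.05, epochs=200):
    """Stage II (supervised): delta rule on the linear output weights."""
    w = np.zeros(len(centers))
    bias = 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            # Gaussian activations of the (already fixed) intermediate layer
            g = np.exp(-((centers - x_i) ** 2).sum(axis=1) / (2 * sigma ** 2))
            # Linear output, so the generalized delta rule reduces to LMS
            err = y_i - (w @ g + bias)
            w += lr * err * g
            bias += lr * err
    return w, bias
```

Note how Stage I never looks at the targets y, only at the distribution of the inputs, while Stage II treats the fixed Gaussian activations as features for an ordinary linear neuron.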
6.2.1 Adjustment of the Neurons from the Intermediate Layer (Stage I)
As mentioned before, the neurons belonging to the intermediate layer of the RBF network use radial basis activation functions, the Gaussian being one of the most common. The expression that defines a Gaussian activation function is given by:

$$g(u) = e^{-\frac{(u - c)^{2}}{2\sigma^{2}}}, \qquad (6.1)$$

where c defines the center of the Gaussian function and σ² denotes its variance.
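Numerically, Eq. (6.1) is a one-liner; the short sketch below evaluates it for a few sample inputs to show how c shifts the peak and how σ² (the parameter var here) controls the spread. The sample values are arbitrary.

```python
import numpy as np

def gaussian(u, c=0.0, var=1.0):
    """Gaussian radial basis function of Eq. (6.1): peaks at u = c, width set by var."""
    return np.exp(-((u - c) ** 2) / (2.0 * var))

u = np.linspace(-3, 3, 7)
print(gaussian(u, c=0.0, var=1.0))   # symmetric bell centered at u = 0
print(gaussian(u, c=1.0, var=0.25))  # narrower bell shifted to u = 1
```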