Recurrent Hopfield Networks
7.1 Introduction
As mentioned in Sect. 2.2.3, recurrent neural networks are those in which the outputs of a neural layer can be fed back to the network inputs. The best-known example of a recurrent network is, of course, the one devised by Hopfield (1982), which is most commonly known as the Hopfield network. This neural network architecture, with global feedback, has the following characteristics:
• Dynamical behavior.
• Ability to memorize relationships.
• Possibility of storing information.
• Easy implementation in analog hardware.
The work developed by Hopfield also triggered, at that time, a renewed and increased interest in artificial neural networks, contributing to the revival of important research in the area, which had been somewhat stagnant since the publication of the book Perceptrons by Minsky and Papert (1969). Indeed, Hopfield's proposal addressed the existing links between recurrent neural architectures, dynamical systems, and statistical physics, thereby attracting the curiosity of other areas of knowledge. His great triumph was to show that single-layer recurrent neural networks can be characterized by an energy function, which is related to the states of their dynamical behavior. Such architectures were also called Ising models, a term used in analogy to ferromagnetism (Amit et al. 1985). Given this background, the minimization of the energy function E(x) takes the network output to stable equilibrium points, which may correspond to the desired solution of a particular problem. Figure 7.1 shows an illustration of stable and unstable equilibrium points.
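To make the role of the energy function concrete, the following minimal sketch (an illustration assumed here, not part of the original text; the 4-neuron weights and variable names are arbitrary) evaluates the quadratic energy E(x) = -1/2 x^T W x + theta^T x of a bipolar state and verifies that an asynchronous update of a single neuron never increases it, provided W is symmetric with zero diagonal:

    import numpy as np

    def hopfield_energy(x, W, theta):
        """Quadratic energy E(x) = -1/2 * x^T W x + theta^T x of a bipolar state."""
        return -0.5 * x @ W @ x + theta @ x

    # Arbitrary 4-neuron example: a symmetric, zero-diagonal weight matrix,
    # the conditions under which E(x) acts as a Lyapunov function of the
    # asynchronous dynamics.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 4))
    W = 0.5 * (W + W.T)          # enforce symmetry
    np.fill_diagonal(W, 0.0)     # no self-feedback
    theta = np.zeros(4)

    x = np.array([1.0, -1.0, 1.0, -1.0])
    print(hopfield_energy(x, W, theta))
    for i in range(4):           # one asynchronous sweep, neuron by neuron
        x[i] = 1.0 if W[i] @ x - theta[i] >= 0.0 else -1.0
        print(hopfield_energy(x, W, theta))   # never increases

The printed energy values are non-increasing, which is precisely the Lyapunov-function argument behind the convergence to stable equilibrium points.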
Fig. 7.1 Illustration showing stable and unstable equilibrium points
It can be noticed that an energy function can have several equilibrium points considered stable. During the network convergence process, starting from some initial states, the network state tends to move toward one of these stable points (fixed points). Besides the remarkable associative memories (Sect. 7.4), the main applications related to Hopfield networks are concentrated in the area of system optimization, such as dynamic programming (Silva et al. 2001; Wang 2004), linear programming (Malek and Yari 2005; Tank and Hopfield 1986), nonlinear constrained optimization (Silva et al. 2007; Xia and Wang 2004), and combinatorial optimization (Atencia et al. 2005; Hopfield and Tank 1985).
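The associative-memory behavior mentioned above can be sketched in a few lines. In the hypothetical example below (the outer-product Hebbian storage rule and the two 8-bit patterns are assumptions made for illustration, anticipating Sect. 7.4), a state corrupted by one flipped bit converges back to the stored pattern, that is, to a stable fixed point:

    import numpy as np

    def hebbian_store(patterns):
        """Outer-product (Hebbian) weights for bipolar patterns, zero diagonal."""
        n = patterns.shape[1]
        W = (patterns.T @ patterns) / n
        np.fill_diagonal(W, 0.0)
        return W

    def recall(x, W, max_sweeps=50):
        """Asynchronous updates until a fixed point (stable equilibrium) is reached."""
        x = x.copy()
        for _ in range(max_sweeps):
            changed = False
            for i in range(len(x)):
                s = 1.0 if W[i] @ x >= 0.0 else -1.0
                if s != x[i]:
                    x[i], changed = s, True
            if not changed:          # fixed point: x = sgn(W x)
                return x
        return x

    patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                         [1, -1, 1, -1, 1, -1, 1, -1]], dtype=float)
    W = hebbian_store(patterns)
    noisy = patterns[0].copy()
    noisy[0] = -noisy[0]             # corrupt one bit of the first pattern
    print(recall(noisy, W))          # converges back to patterns[0]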
7.2 Operating Principles of the Hopfield Network
According to Fig. 7.2, the originally proposed Hopfield network consists of a single layer in which all neurons are completely connected, that is, each network neuron is connected to all the others and to itself (all network outputs are fed back to all network inputs). Since the Hopfield network is composed of a single layer of neurons, the same terminology usually adopted in the literature
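This global-feedback structure can be summarized by iterating the map x(t+1) = g(W x(t) - theta), where g is a hard-limiter activation. The fragment below is a minimal sketch under that assumption (function names are illustrative; a synchronous update is used for brevity, although such dynamics may also settle into period-two cycles, hence the iteration cap):

    import numpy as np

    def feedback_step(x, W, theta):
        """One global-feedback pass: every neuron receives all current outputs."""
        u = W @ x - theta                      # local fields of the single layer
        return np.where(u >= 0.0, 1.0, -1.0)   # bipolar hard-limiter outputs

    def run(x0, W, theta, max_iter=100):
        """Iterate x(t+1) = g(W x(t) - theta) until the state stops changing."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            x_next = feedback_step(x, W, theta)
            if np.array_equal(x_next, x):      # reached a fixed point
                return x_next
            x = x_next
        return x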