Research on Hopfield Neural Network in Associative Memory Storage

Li Tu and Chi Zhang

Abstract In this paper an improved Hopfield neural network algorithm is presented, for the case in which the number of patterns to be stored approaches the network's capacity for perfect recall. The improvement addresses the following limitations of the standard Hopfield algorithm: its storage capacity is far from ideal; stored patterns that are correlated, or too close to one another, interfere with each other; and the network sometimes converges not to any stored memory pattern but to a spurious "pseudo-state".

Keywords Hopfield neural network • Associative memory • Improved algorithm for associative memory

155.1 Introduction

A discrete Hopfield neural network consists of N fully connected neurons. Each neuron's output takes one of two discrete values, 0 and 1 (or −1 and 1), which represent the two states of a neuron: inhibition and activation [1]. The neurons of a discrete Hopfield network are connected to one another, and the connection strengths are described by weights; the N × N matrix of these weights is called the weight matrix. Each neuron also has a threshold. An N-dimensional discrete Hopfield network is therefore uniquely defined by a weight matrix and a threshold vector [2, 3]. Four indicators are used to describe the performance of such networks: storage capacity, stability, radius of attraction, and convergence time. From a structural point of view, the general discrete Hopfield network is fully connected; it contains both positive and negative feedback, so there are many feedback loops in it [4, 5]. Fully interconnected artificial neural networks have the highest structural complexity; they can reflect

L. Tu (*) • C. Zhang
Department of Computer Science, Hunan City University, Yiyang, Hunan 413000, China
e-mail: [email protected]; [email protected]

S. Zhong (ed.), Proceedings of the 2012 International Conference on Cybernetics and Informatics, Lecture Notes in Electrical Engineering 163, DOI 10.1007/978-1-4614-3872-4_155, © Springer Science+Business Media New York 2013


Fig. 155.1 Layer structure of a fully connected Hopfield network

the complex structural features of the biological nervous system. The layer structure of a fully connected Hopfield network is shown in Fig. 155.1.

A discrete Hopfield neural network is described as follows. There are n neurons in the network; the input of neuron i is denoted by $u_i$ and its output by $v_i$. Both are functions of time, and $v_i(t)$ is called the state of neuron i at time t:

$$u_i(t) = \sum_{\substack{j=1 \\ j \neq i}}^{n} w_{ij}\, v_j(t) + b_i \qquad (155.1)$$

where $b_i$ is the bias (deviation) of neuron i. The output state of neuron i at time t + 1 is then given by (155.2):

$$v_i(t+1) = f(u_i(t)) \qquad (155.2)$$

The activation function $f(\cdot)$ can be a sign function $\mathrm{sgn}(\cdot)$ or a step function. When it is a sign function, the output of each neuron takes the discrete values 1 or −1, as described by the following function:

$$v_i(t+1) = \begin{cases} 1, & \sum\limits_{\substack{j=1 \\ j \neq i}}^{n} w_{ij}\, v_j(t) + b_i \geq 0 \\ -1, & \text{otherwise} \end{cases}$$
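As a concrete illustration of the update rule above, the following sketch stores one bipolar pattern in a small network and recalls it from a corrupted probe. This is a minimal example under stated assumptions: n = 4 neurons, states ±1, sign activation with sgn(0) = 1, asynchronous updates, and the standard Hebbian outer-product rule for the weights (the paper's improved storage rule is not shown here); all names are illustrative, not from the paper.

```python
import numpy as np

def hebbian_weights(patterns):
    """Symmetric weight matrix with zero diagonal built from the stored
    patterns by the standard Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)  # no self-connection, i.e. j != i in Eq. (155.1)
    return W

def update(W, b, v, i):
    """One asynchronous update of neuron i (Eqs. 155.1 and 155.2)."""
    u = W[i] @ v + b[i]             # u_i(t) = sum_{j != i} w_ij v_j(t) + b_i
    return 1.0 if u >= 0 else -1.0  # v_i(t+1) = sgn(u_i(t)), sgn(0) = 1

# Store one pattern, then recall it from a probe with one bit flipped.
pattern = np.array([1.0, -1.0, 1.0, -1.0])
W = hebbian_weights(pattern[None, :])
b = np.zeros(4)  # zero thresholds for simplicity

v = np.array([1.0, -1.0, 1.0, 1.0])  # corrupted probe: last bit flipped
for _ in range(2):                    # sweep until the state settles
    for i in range(4):
        v[i] = update(W, b, v, i)

print(v)  # the network settles on the stored pattern
```

With a single stored pattern the dynamics converge to it in one sweep; as the abstract notes, storing many (or strongly correlated) patterns can instead drive the network into a spurious "pseudo-state".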