Hamming Star-Convexity Packing in Information Storage

Research Article

Mau-Hsiang Shih and Feng-Sheng Tsai

Department of Mathematics, National Taiwan Normal University, 88 Section 4, Ting Chou Road, Taipei 11677, Taiwan

Correspondence should be addressed to Feng-Sheng Tsai, [email protected]

Received 8 December 2010; Accepted 16 December 2010

Academic Editor: Jen-Chih Yao

Copyright © 2011 M.-H. Shih and F.-S. Tsai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A major puzzle in neural networks is understanding the information-encoding principles that implement the functions of brain systems. Population coding in neurons and plastic changes in synapses are two important subjects in attempts to explore such principles, and they form the basis of the modern theory of neuroscience concerning self-organization and associative memory. Here we wish to suggest an information storage scheme based on the dynamics of evolutionary neural networks, essentially reflecting the meta-complication of the dynamical changes of neurons as well as the plastic changes of synapses. The information storage scheme may lead to the development of a complete description of all the equilibrium states (fixed points) of Hopfield networks, a space-filling network that weaves the intricate structure of Hamming star-convexity, and a plasticity regime that encodes information based on algorithmic Hebbian synaptic plasticity.
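For later reference, the two standard notions underlying this description can be written out explicitly. The following is textbook background, not quoted from the article; the sign convention and the threshold symbol θ_i are assumptions of this sketch:

    % Hamming distance between x, y in {0,1}^n (standard definition)
    d_H(x, y) = \#\{\, i : x_i \neq y_i \,\}

    % Equilibrium (fixed-point) condition for a Hopfield network with
    % weights w_{ij} and thresholds \theta_i (sign convention assumed)
    x_i = \operatorname{sgn}\Bigl(\sum_{j=1}^{n} w_{ij} x_j - \theta_i\Bigr),
    \qquad i = 1, \dots, n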

1. Introduction

The study of memory includes two important components: the storage component of memory and the systems component of memory [1, 2]. The first is concerned with exploring the molecular mechanisms whereby memory is stored, whereas the second is concerned with analyzing the organizing principles that mediate brain systems to encode, store, and retrieve memory. The first neurophysiological description of the systems component of memory was proposed by Hebb [3]. His postulate reveals a principle of learning, often summarized as "the connections between neurons are strengthened when they fire simultaneously." The Hebbian concept has stimulated an intensive effort to build associative memory models of the brain [4–9]. It has also led to the development of a LAMINART model of matching in laminar visual cortical circuitry [10, 11], the development of an Ising model used in statistical physics [12–15], and the study of constrained optimization problems such as the famous traveling salesman problem [16]. However, since it was initiated by Kohonen and Anderson in 1972, associative memory has remained widely open in neural networks [17–21]. It generally includes questions concerning a description of collective dynamics and computing with attractors in neural networks. Hence the central question [22]: "given an arbitrary set of prototypes of 0-1 strings of length n, is there any recurrent network s
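To make "computing with attractors" concrete, the following minimal sketch is an illustration under assumptions, not the article's construction: the example strings, the ±1 coding of 0-1 strings, and the tie-breaking convention are ours. It stores two prototypes in a Hopfield-type network via the classical Hebbian outer-product rule and checks that each prototype satisfies the equilibrium (fixed-point) condition x = sgn(Wx):

    import numpy as np

    def hebbian_weights(prototypes):
        # Symmetric weights from +/-1 prototypes via the classical
        # Hebbian outer-product rule, with zero self-coupling.
        n = prototypes.shape[1]
        W = np.zeros((n, n))
        for p in prototypes:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0.0)
        return W / n

    def is_fixed_point(W, x):
        # Equilibrium condition: x_i = sgn(sum_j W_ij x_j) for all i.
        s = np.sign(W @ x)
        s[s == 0] = 1  # tie-breaking convention chosen for this sketch
        return np.array_equal(s, x)

    # Two illustrative prototypes over {0,1}^8, mapped to {-1,+1}.
    bits = np.array([[1, 0, 1, 0, 1, 0, 1, 0],
                     [1, 1, 1, 1, 0, 0, 0, 0]])
    prototypes = 2 * bits - 1
    W = hebbian_weights(prototypes)
    print([is_fixed_point(W, p) for p in prototypes])  # [True, True]

For these two (mutually orthogonal) prototypes the Hebbian rule makes both of them equilibrium states, but for an arbitrary prototype set this can fail, which is what lends the question above its force.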