Toward Human-Level Massively-Parallel Neural Networks with Hodgkin-Huxley Neurons
Abstract. This paper describes neural network algorithms and software that scale up to massively parallel computers. The neuron model used is the best available at this time: the Hodgkin-Huxley equations. Most massively parallel simulations use highly simplified neuron models, which cannot accurately reproduce the behavior of biological neurons or the wide variety of neuron types. Using C++ and MPI we can scale these networks to human-level sizes. Computers such as China's TianHe-2 are capable of supporting human-level neural networks.

Keywords: Neural networks · Neurons · Parallel · Hodgkin-Huxley · MPI
1 Introduction

Artificial intelligence began in roughly 1956 at a conference at Dartmouth College. The participants, and many researchers after them, were clearly overly optimistic. As with many new technologies, the technology was oversold for decades. Computer processing power, however, has been doubling every two years thanks to Moore's law. In the 1950s one of the main computers was the IBM 701, which could perform 16,000 adds/subtracts per second, or 2,000 multiplies/divides per second. This is roughly a trillion times less computational power than the human brain. As shown in Fig. 1, it is more on par with the C. elegans worm, which is about 1 mm long and has 302 neurons and 6393 synapses [1]. Over a wide range of biological creatures, it is estimated [2, 3] that the number of synapses in biological systems can be modeled via:

    Synapses = 3.7 × Neurons^1.32    (1)
A cockroach has about a million neurons, and using the above formula has about 300 million synapses. A rough estimate is that each synapse can store 1–8 bits and can perform roughly 1–2 operations per second. Thus from these crude estimates the IBM 701 had performance about 10,000 times worse than a cockroach neural system. It is amazing that the term "artificial intelligence" (AI) was coined during this era of horribly low-powered computers. Not until about 1975 did we have a computer on the order of a cockroach, the Cray 1, which had a speed of roughly 160 megaflops. It is not surprising that AI by this time was not taken seriously except in science fiction.

© Springer International Publishing Switzerland 2016
B. Steunebrink et al. (Eds.): AGI 2016, LNAI 9782, pp. 314–323, 2016. DOI: 10.1007/978-3-319-41649-6_32
Fig. 1. Speed and memory of computers and biological systems.
About 20 years later there was the ASCI Red computer, with 9298 processors, a terabyte of memory, and a speed of 1 teraflop. If this could have been harnessed for modeling a brain, it would have been on the order of a rat, which has about 200 million neurons. The five largest parallel computers that exist today (which aren't classified) are shown in Table 1 [4]. The TianHe-2 computer in China has more than 3 million processor cores, 1 petabyte of memory, and a peak speed of 55 petaflops.

Table 1. Top five computers in the world (www.top500.org, Nov. 2015).

    System             Processor Cores   Peak Speed (PetaFlops)   Memory (PetaBytes)   Power Required (MWatts)
    TianHe-2 (China)   3,120,000         55                       1                    17.8