Stimulus-induced sequential activity in supervisely trained recurrent networks of firing rate neurons



ORIGINAL PAPER

Oleg V. Maslennikov · Vladimir I. Nekorkin

Received: 27 March 2020 / Accepted: 24 June 2020 © Springer Nature B.V. 2020

Abstract In this work, we consider recurrent neural networks of firing-rate neurons trained in a supervised manner to generate multidimensional sequences of given configurations. We study the dynamical objects in the network's multidimensional phase space that underlie successfully trained outputs, and we analyze the spatiotemporal neural activity and its features in three cases. First, we consider the autonomous generation of complex sequences by output units driven by a recurrent network. Second, we study how input pulses can trigger different output units. Third, we explore the case where input pulses allow us to switch between different sequential activities of the output units.

Keywords Recurrent neural network · Supervised learning · Reservoir computing · Sequential activity
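The model equations and training procedure are specified later in the paper; as orientation for the reservoir-computing setting named in the keywords, the following is a minimal sketch in Python, assuming a standard tanh firing-rate network with a ridge-regression readout. The network size, time constant, coupling gain g, and the toy sinusoidal target are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

# Minimal firing-rate reservoir sketch (illustrative assumptions:
# tanh nonlinearity, random coupling of gain g, ridge readout --
# not the exact model or training scheme of this paper).
rng = np.random.default_rng(seed=0)
N, T, dt, tau, g = 200, 2000, 0.1, 1.0, 1.5

J = g * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # recurrent weights
x = rng.normal(0.0, 0.5, size=N)                        # neuron states
R = np.empty((T, N))                                    # firing-rate history

for t in range(T):
    r = np.tanh(x)                  # firing rates
    x += (dt / tau) * (-x + J @ r)  # autonomous network dynamics
    R[t] = r

# Supervised readout: fit output weights so that z(t) = R(t) @ w_out
# approximates a target sequence (a toy sinusoid here).
target = np.sin(2.0 * np.pi * np.arange(T) * dt / 20.0)
lam = 1e-3                          # ridge regularization strength
w_out = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ target)
z = R @ w_out                       # trained network output
```

With g > 1, random rate networks of this type typically operate in a rich, self-sustained regime, which is what makes the autonomous sequence generation of the first case possible; the input-pulse experiments of the second and third cases would add an external drive term to the state update.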

The study of autonomous dynamics in this work was carried out as part of the state assignment of the IAP RAS, Project No. 0035-2019-0011. The study of stimulus-driven dynamics was supported by the Russian Science Foundation, Project No. 19-72-00112.

O. V. Maslennikov (✉) · V. I. Nekorkin
Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
e-mail: [email protected]

1 Introduction

Recurrent neural networks are, on the one hand, a class of network models used in computational neuroscience to explain biological phenomena of neural activity [1–4]. On the other hand, they are a tool for solving applied problems in machine learning [5,6]. These networks are characterized by the fact that each neuron may receive inputs from any other neuron. As a result, network activity is driven not only by external stimuli: the self-sustained ongoing dynamics can itself be rather complex, similar to the spontaneous activity observed in the brain. Computations performed by recurrent neural networks are thus intrinsically embedded in time, unlike those carried out by feed-forward neural networks. Studying them with the methods and approaches of dynamical systems theory can uncover the dynamical mechanisms underlying prototypical signal transformations and functions observed in brain neural networks [4,7–9]. Moreover, the structure of the latter within higher regions such as cortical columns is mostly recurrent [10]. A large body of literature is therefore devoted to different aspects of recurrent neural networks.

In computational neuroscience, recurrent neural networks have been widely used as models of information processing related to the formation of associative [11–13] and working memory [14–17]. Complex emergent brain phenomena such as network oscillations [18–20] and network multistability [21] have also been described and explained in terms of the dynamics of recurrent neural networks. In machine learning, there are many applications of recurrent neural networks that involve sequential inputs, including speech [5