Neurally plausible mechanisms for learning selective and invariant representations
SHORT REPORT
Open Access
Fabio Anselmi1,2,3*, Ankit Patel1,4 and Lorenzo Rosasco2
*Correspondence: [email protected]
1 Center for Neuroscience and Artificial Intelligence, Department of Neuroscience, Baylor College of Medicine, Baylor Plaza, 77030 Houston, USA
2 Laboratory for Computational and Statistical Learning (LCSL), Istituto Italiano di Tecnologia, Via Dodecaneso, Genova, Italy
Full list of author information is available at the end of the article
Abstract
Coding for visual stimuli in the ventral stream is known to be invariant to identity-preserving nuisance transformations. Indeed, much recent theoretical and experimental work suggests that the main challenge for the visual cortex is to build up such nuisance-invariant representations. Recently, artificial convolutional networks have succeeded both in learning such invariant properties and, surprisingly, in predicting cortical responses in macaque and mouse visual cortex with unprecedented accuracy. However, some of the key ingredients that enable this success (supervised learning and the backpropagation algorithm) are neurally implausible. This makes it difficult to relate advances in understanding convolutional networks to the brain. In contrast, many existing neurally plausible theories of invariant representations in the brain involve unsupervised learning and have been strongly tied to specific plasticity rules. To close this gap, we study an instantiation of a simple-complex cell model and show, for a broad class of unsupervised learning rules (including Hebbian learning), that we can learn object representations that are invariant to nuisance transformations belonging to a finite orthogonal group. These findings may have implications for developing neurally plausible theories and models of how the visual cortex, or artificial neural networks, build selectivity for discriminating objects and invariance to real-world nuisance transformations.

Keywords: Invariance; Hebbian learning; Group theory
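To fix ideas, the invariance mechanism at the heart of such simple-complex cell models can be stated in one line (the notation t for a template, G for the finite orthogonal group, and η for a pooling nonlinearity is ours, not fixed by the abstract):

$$\mu(x) \;=\; \frac{1}{|G|} \sum_{g \in G} \eta\big(\langle x,\; g\,t \rangle\big).$$

Since each g' ∈ G is orthogonal, ⟨g'x, g t⟩ = ⟨x, (g')^⊤ g t⟩, and g ↦ (g')^⊤ g merely permutes the elements of G; hence μ(g'x) = μ(x) for every g' ∈ G, while dependence on the template t preserves selectivity.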
1 Context and purpose of the study
How does the mammalian visual cortex build up object representations that are simultaneously selective for object identity and invariant to nuisance variation (e.g. changes in location or pose)? This is an old and challenging problem, with a storied history of theoretical and practical attempts at solutions in both pattern recognition and computational neuroscience [1–12]. Much theoretical and experimental work [13–15] supports the hypothesis that most of the complexity of the object category recognition task is due to nuisance transformations such as pose, scale, and illumination. From this perspective, a natural property for a ventral stream representation to have is the ability to factor out task-nuisance variation (invariance) while still retaining task-relevant information (selectivity). How can such an architecture be built? Hubel and Wiesel's seminal work [16–18] on the cat visual cortex suggests an architectural template: alternating layers of simple and complex cells.
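Below is a minimal numerical sketch (ours, not the authors' code) of this simple-complex motif. It assumes the nuisance group is the set of cyclic shifts of R^d (a finite group of permutation, hence orthogonal, matrices) and uses a randomly drawn template in place of a learned one:

```python
import numpy as np

# Minimal sketch (ours, not the authors' code): a simple-complex cell pair
# that is invariant to a finite group of nuisance transformations. Here the
# group is the cyclic shifts of R^d, which act as permutation (and hence
# orthogonal) matrices.

rng = np.random.default_rng(0)
d = 16

def orbit(t):
    """All cyclic shifts of t: the orbit of t under the group C_d."""
    return np.stack([np.roll(t, k) for k in range(d)])

def complex_cell(x, templates, eta=np.tanh):
    """Simple cells compute <x, g t> for every group element g; the complex
    cell pools their nonlinear responses over the whole orbit."""
    return eta(templates @ x).mean()

# In the paper's setting the template would be learned by an unsupervised
# (e.g. Hebbian) rule from transformed examples; a random template suffices
# to demonstrate the pooling mechanism.
t = rng.standard_normal(d)
templates = orbit(t)

x = rng.standard_normal(d)
for k in range(d):  # transform the input by every group element
    gx = np.roll(x, k)
    assert np.isclose(complex_cell(gx, templates), complex_cell(x, templates))
print("complex-cell signature is invariant to all cyclic shifts of the input")
```

Transforming the input only permutes the set of simple-cell responses, so any pooling that is insensitive to ordering (here, the mean) yields an invariant signature; the random template here only illustrates the mechanism, not the learning result established in the paper.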