On the locality of the natural gradient for learning in deep Bayesian networks



Nihat Ay 1,2,3

Received: 29 June 2020 / Revised: 29 October 2020 / Accepted: 31 October 2020
© The Author(s) 2020

Abstract

We study the natural gradient method for learning in deep Bayesian networks, including neural networks. There are two natural geometries associated with such learning systems consisting of visible and hidden units. One geometry is related to the full system, the other one to the visible sub-system. These two geometries imply different natural gradients. As a first step, we demonstrate a great simplification of the natural gradient with respect to the first geometry, due to locality properties of the Fisher information matrix. This simplification does not directly translate to a corresponding simplification with respect to the second geometry. We develop the theory for studying the relation between the two versions of the natural gradient and outline a method for simplifying the natural gradient with respect to the second geometry based on the first one. This method suggests incorporating a recognition model as an auxiliary model for the efficient application of the natural gradient method in deep networks.

Keywords Natural gradient · Fisher–Rao metric · Deep learning · Helmholtz machines · Wake–sleep algorithm

Contents

1 Introduction
2 Locality of deep learning in Bayesian and neural networks
3 Gradients on full versus coarse grained models
4 Conclusions: A natural gradient perspective of the wake–sleep algorithm
Appendix
References


Nihat Ay
[email protected]

1 Max Planck Institute for Mathematics in the Sciences, 04103 Leipzig, Germany
2 Leipzig University, 04109 Leipzig, Germany
3 Santa Fe Institute, Santa Fe, NM 87501, USA


1 Introduction

1.1 The natural gradient method

Within the last decade, deep artificial neural networks have led to unexpected successes of machine learning in a large number of applications [15]. One important direction of research within the field of deep learning is based on the natural gradient method from information geometry [3,4,8]. It was proposed by Amari [2] as a gradient method that is invariant with respect to coordinate transformations. This method has turned out to be extremely efficient within various fields of artificial intelligence and machine learning, including neural networks [2], reinforcement learning [7,19], and robotics [27]. It is known to overcome several problems of traditional gradient methods. Most importantly, the natural gradient method avoids the so-called plateau problem, and it is
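As background for what follows, the natural gradient update can be stated briefly; this is a minimal sketch in generic notation ($L$ for the objective, $G$ for the Fisher information matrix, $\eta$ for the step size), not the notation used later in the paper. For a parametrized statistical model $p_\theta$, the natural gradient preconditions the ordinary gradient by the inverse Fisher matrix:

\[
\widetilde{\nabla} L(\theta) \,=\, G(\theta)^{-1} \nabla L(\theta),
\qquad
\theta_{t+1} \,=\, \theta_t \,-\, \eta\, G(\theta_t)^{-1} \nabla L(\theta_t),
\qquad
G_{ij}(\theta) \,=\, \mathbb{E}_{p_\theta}\!\left[
  \frac{\partial \log p_\theta(X)}{\partial \theta^i}\,
  \frac{\partial \log p_\theta(X)}{\partial \theta^j}
\right].
\]

Because $G$ transforms as a metric tensor under smooth reparametrizations, the resulting update direction does not depend on the chosen coordinates, which is the invariance referred to above.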