GADE: A Generative Adversarial Approach to Density Estimation and its Applications



M. Ehsan Abbasnejad1 · Javen Shi1 · Anton van den Hengel1 · Lingqiao Liu1

Received: 29 April 2019 / Accepted: 15 July 2020

© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

Density estimation is a challenging unsupervised learning problem. Current maximum likelihood approaches to density estimation are either restrictive or incapable of producing high-quality samples. On the other hand, likelihood-free models such as generative adversarial networks produce sharp samples without a density model. The lack of a density estimate, however, limits the applications to which the sampled data can be put. We propose a generative adversarial density estimator (GADE), a density estimation approach that bridges the gap between the two. By allowing for a prior on the parameters of the model, we extend our density estimator to a Bayesian model in which we can leverage the predictive variance to measure our confidence in the likelihood. Our experiments on challenging applications such as visual dialog and autonomous driving, where both the density and the confidence in predictions are crucial, show the effectiveness of our approach.

Keywords Generative models · GANs · Flow-based generative models · Deep learning
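To make concrete the change-of-variables idea underlying the flow-based density estimators referenced above, the following is a minimal illustrative sketch (our own toy example, not the paper's GADE model). It uses a single invertible affine map as the generator; real flow-based models such as real NVP stack many invertible layers, but the log-density computation has the same form: log p_x(x) = log p_z(g⁻¹(x)) + log|det ∂g⁻¹/∂x|.

```python
import numpy as np

# Toy invertible generator x = A z + b with latent z ~ N(0, I).
# Because g is invertible, the exact density of x is available via
# the change-of-variables formula used by flow-based models.
rng = np.random.default_rng(0)
d = 2
A = np.array([[2.0, 0.5],
              [0.0, 1.5]])      # invertible weight matrix (det = 3)
b = np.array([1.0, -1.0])

def sample(n):
    """Draw n samples from the generator: x = A z + b, z ~ N(0, I)."""
    z = rng.standard_normal((n, d))
    return z @ A.T + b

def log_density(x):
    """Exact log p_x(x): invert the generator, score z under N(0, I),
    and correct by the log-absolute-determinant of the Jacobian."""
    z = np.linalg.solve(A, (x - b).T).T          # z = A^{-1}(x - b)
    log_pz = -0.5 * np.sum(z**2, axis=1) - 0.5 * d * np.log(2 * np.pi)
    _, logabsdet = np.linalg.slogdet(A)
    return log_pz - logabsdet                    # change of variables

x = sample(5)
print(log_density(x))
```

Since x is Gaussian here, the result can be checked against the closed-form N(b, AAᵀ) log-density; in a deep flow the same formula applies layer by layer, with no closed form available.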

Communicated by Jun-Yan Zhu, Hongsheng Li, Eli Shechtman, Ming-Yu Liu, Jan Kautz, Antonio Torralba.

M. Ehsan Abbasnejad [email protected]
Javen Shi [email protected]
Anton van den Hengel [email protected]
Lingqiao Liu [email protected]

1 Australian Institute for Machine Learning, The University of Adelaide, Adelaide, Australia

1 Introduction

Generative modelling is amongst the longest-standing problems in machine learning, and one that has been drawing increasing attention. This is at least partly due to the shortcomings of the predominant discriminative deep learning-based models. These shortcomings include a failure to generalise, a lack of robustness to changes in the data distribution, and the need for large volumes of training data.

Deep generative models have been successful in addressing some of these shortcomings. In particular, deep maximum likelihood models such as deep Boltzmann machines (Salakhutdinov and Hinton 2009), variational autoencoders (VAEs) (Kingma and Welling 2014), autoregressive models (Gregor et al. 2014; Oord et al. 2016), and real non-volume preserving transformations (Dinh et al. 2016) have demonstrated an impressive ability to model complex densities. Likelihood-free approaches (Gutmann et al. 2018; Goodfellow et al. 2014) such as generative adversarial networks (GANs) (Goodfellow et al. 2014) have outperformed previous deep generative models in their ability to model complex distributions. In image-based problems they have shown a particular ability to consistently generate sharp and realistic-looking samples (Karras et al. 2017; Zhang et al. 2017; Nguyen et al. 2017). GANs are one of the implicit generative models wherein density is not expl