Perceptual image quality using dual generative adversarial network
DEEP LEARNING APPROACHES FOR REALTIME IMAGE SUPER RESOLUTION (DLRSR)
Masoumeh Zareapoor1 · Huiyu Zhou2 · Jie Yang1

Received: 31 December 2018 / Accepted: 9 May 2019
© Springer-Verlag London Ltd., part of Springer Nature 2019
Abstract
Generative adversarial networks (GANs) have achieved remarkable success in many computer vision applications owing to their ability to learn complex data distributions. In particular, they can generate realistic images from a latent space with a simple and intuitive structure. Existing models have mainly focused on improving performance; little attention has been paid to making the model robust. In this paper, we investigate solutions to the super-resolution problem, in particular perceptual quality, by proposing a robust GAN. Unlike the standard GAN, the proposed model employs two generators and two discriminators: one discriminator determines whether samples come from the real data or are generated, while the other discriminator acts as a classifier that returns wrongly generated samples to their corresponding generators. The generators learn a mixture of many distributions, from the prior to the complex data distribution. The model is trained with a feature matching loss, which allows wrong samples to be returned to their corresponding generators so that realistic-looking samples can be regenerated. Experimental results on various datasets show the superiority of the proposed model compared with state-of-the-art methods.

Keywords Image processing · Perceptual quality · Data distribution · Generative adversarial network · Classification
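The PyTorch-style sketch below illustrates, at a high level, the training signals described in the abstract: two generators paired with an adversarial discriminator (real vs. generated) and a classifier discriminator that attributes each generated sample to the generator that produced it. It is a minimal sketch rather than the authors' implementation; all module names, network sizes, and the simple feature-matching term are assumptions made for illustration.

import torch
import torch.nn as nn

# Illustrative sketch only (not the authors' code): two generators, one
# adversarial discriminator and one classifier discriminator, as outlined
# in the abstract. Dimensions and names are assumptions.
latent_dim, data_dim = 64, 784

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

G1, G2 = mlp(latent_dim, data_dim), mlp(latent_dim, data_dim)  # two generators
D_feat = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU())    # shared features of the adversarial discriminator
D_head = nn.Linear(128, 1)                                     # real vs. generated score
D_cls = mlp(data_dim, 2)                                       # which generator produced a fake

bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
opt_g = torch.optim.Adam(list(G1.parameters()) + list(G2.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(
    list(D_feat.parameters()) + list(D_head.parameters()) + list(D_cls.parameters()), lr=2e-4)

def train_step(real):
    b = real.size(0)
    z = torch.randn(b, latent_dim)
    fake1, fake2 = G1(z), G2(z)

    # Discriminator update: D_adv (D_feat + D_head) separates real from
    # generated samples, while D_cls learns to attribute each fake to its generator.
    d_loss = (bce(D_head(D_feat(real)), torch.ones(b, 1))
              + bce(D_head(D_feat(fake1.detach())), torch.zeros(b, 1))
              + bce(D_head(D_feat(fake2.detach())), torch.zeros(b, 1))
              + ce(D_cls(fake1.detach()), torch.zeros(b, dtype=torch.long))
              + ce(D_cls(fake2.detach()), torch.ones(b, dtype=torch.long)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the adversarial discriminator, plus a simple
    # feature-matching term that pulls the mean discriminator features of the
    # generated batches toward those of the real batch.
    fm = ((D_feat(fake1).mean(0) - D_feat(real).mean(0)).pow(2).mean()
          + (D_feat(fake2).mean(0) - D_feat(real).mean(0)).pow(2).mean())
    g_loss = (bce(D_head(D_feat(fake1)), torch.ones(b, 1))
              + bce(D_head(D_feat(fake2)), torch.ones(b, 1)) + fm)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

In the proposed setting, the classifier discriminator is additionally used to route poorly generated samples back to their corresponding generator for regeneration; the sketch above shows only the attribution, adversarial, and feature-matching signals.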
Jie Yang (corresponding author): [email protected] · Masoumeh Zareapoor: [email protected] · Huiyu Zhou: [email protected]

1 School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
2 Department of Informatics, University of Leicester, Leicester, UK

1 Introduction

Image super-resolution is a technique that has attracted much attention and seen considerable progress in recent years. Despite this progress, no unique solution exists, particularly for high magnification ratios. The per-pixel loss used by existing approaches does not properly capture perceptual differences between the output and input images [1, 2]. Thus, for high upscaling factors (i.e., a scale factor of 4 or more), it is difficult to recover the high-frequency details in the images. Generative adversarial networks (GANs), proposed by Goodfellow et al. [3], combine deep learning with generative modelling. GAN models are known to produce realistic samples from a latent space in a simple manner. In their original setting, they employ two neural networks trained adversarially in a minimax game: a generator G is trained to produce fake samples from a noise space, whereas the discriminator learns to distinguish fake (generated) samples from real (true data) samples. Since the advent of GANs, many works