Video-to-Video Dynamic Super-Resolution for Grayscale and Color Sequences
Sina Farsiu,1 Michael Elad,2 and Peyman Milanfar1
1 Electrical Engineering Department, University of California Santa Cruz, Santa Cruz, CA 95064, USA
2 Computer Science Department, Technion – Israel Institute of Technology, Haifa 32000, Israel
Received 17 December 2004; Revised 10 March 2005; Accepted 15 March 2005

We address the dynamic super-resolution (SR) problem of reconstructing a high-quality set of monochromatic or color super-resolved images from low-quality monochromatic, color, or mosaiced frames. Our approach includes a joint method for simultaneous SR, deblurring, and demosaicing, thereby taking into account the practical color measurements encountered in video sequences. For the case of translational motion and common space-invariant blur, the proposed method is based on a very fast and memory-efficient approximation of the Kalman filter (KF). Experimental results on both simulated and real data are supplied, demonstrating the presented algorithms and their strength.

Copyright © 2006 Hindawi Publishing Corporation. All rights reserved.
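For concreteness, the dynamic formulation summarized above is conventionally posed as a linear state-space model on which a Kalman filter can operate; the notation below is an illustrative sketch rather than the paper's exact symbols:

x(t) = F(t) x(t − 1) + u(t)    (system equation: the current HR frame is a warped version of the previous one plus system noise),
y(t) = D H x(t) + w(t)         (measurement equation: the observed LR frame is a blurred, decimated, noisy version of the current HR frame),

where x(t) and y(t) denote the lexicographically ordered HR and LR frames, F(t) is the (translational) warp operator, H is the space-invariant blur, D is the decimation operator, and u(t), w(t) are the system and measurement noise terms. The fast, memory-efficient KF approximation mentioned in the abstract operates on a model of this form.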
1. INTRODUCTION
Theoretical and practical limitations usually constrain the achievable resolution of any imaging device. While higher-quality images may result from more expensive imaging systems, we often wish to increase the resolution of images previously captured under nonideal conditions; enhancing a video sequence captured by surveillance cameras at a crime scene is one such case. The basic idea behind SR is the fusion of a sequence of low-resolution (LR), noisy, blurred images to produce a higher-resolution image. Early works on SR showed that it is the aliasing in the LR images that enables recovery of the high-resolution (HR) fused image, provided that relative subpixel motion exists between the undersampled input images [1]. However, in contrast to the clean but practically naive frequency-domain description of SR in that early work, SR is in general a computationally complex and numerically ill-posed problem [2]. In recent years, more sophisticated SR methods have been developed (see [2–12] as representative works).

In this work, we consider SR applied to an image sequence, producing a sequence of SR images. At time point t, we desire an SR result that fuses the causal images at times t, t − 1, . . . , 1. The natural approach, as most existing works suggest, is to apply regular SR to this set of images with the tth frame as a reference, produce the SR output, and repeat this process anew at each temporal point. We refer to this as the static SR method, since it does not exploit the temporal evolution of the process. In contrast, in this work we adopt a dynamic point of view, as introduced in [13, 14], in developing the new SR solution. The memory and computational requirements of the static process are so taxing as to preclude its direct application to the dynamic case without a highly efficient algorithm.
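To make the computational contrast concrete, the following is a minimal sketch (not the authors' algorithm) of a recursive, causal data-fusion step on the HR grid: each incoming LR frame updates a running pixelwise mean, so the per-frame cost is constant instead of growing with the number of past frames re-fused by a static method. The function name, the assumption of integer translational shifts already expressed in HR-grid pixels, and the deferral of deblurring to a separate step are all illustrative choices, not details taken from the paper.

```python
import numpy as np

def dynamic_shift_and_add(lr_frames, shifts, r):
    """Causally fuse LR frames onto an HR grid (resolution factor r).

    lr_frames: iterable of 2D arrays, all of shape (h, w).
    shifts: per-frame (dy, dx) translational motion, quantized to HR-grid
            pixels and assumed to lie in {0, ..., r-1} for simplicity.
    Yields, after each frame, the running estimate of the (still blurred)
    HR image and a weight map counting measurements per HR pixel.
    """
    lr_frames = list(lr_frames)
    h, w = lr_frames[0].shape
    z = np.zeros((h * r, w * r))       # running mean on the HR grid
    count = np.zeros((h * r, w * r))   # measurements per HR pixel

    for frame, (dy, dx) in zip(lr_frames, shifts):
        # LR pixel (i, j) lands on HR pixel (r*i + dy, r*j + dx).
        ys = np.arange(h)[:, None] * r + dy
        xs = np.arange(w)[None, :] * r + dx
        # Recursive update: fold the new measurement into the running mean,
        # so the cost per time step does not depend on how many frames
        # have already been processed.
        count[ys, xs] += 1.0
        z[ys, xs] += (frame - z[ys, xs]) / count[ys, xs]
        yield z.copy(), count.copy()
```

In this toy version, HR pixels never hit by any measurement (count == 0) would still require interpolation, and the deblurring/regularization stage that a full SR pipeline applies to the fused estimate is omitted; the method developed in this paper replaces such naive averaging with an approximate Kalman update that also propagates per-pixel uncertainty over time.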