Video Enhancement Using Adaptive Spatio-Temporal Connective Filter and Piecewise Mapping
Research Article

Chao Wang, Li-Feng Sun, Bo Yang, Yi-Ming Liu, and Shi-Qiang Yang

Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China

Correspondence should be addressed to Chao Wang, [email protected]

Received 28 August 2007; Accepted 3 April 2008

Recommended by Bernard Besserer

This paper presents a novel video enhancement system based on an adaptive spatio-temporal connective (ASTC) noise filter and an adaptive piecewise mapping function (APMF). For ill-exposed or noisy videos, we first introduce a novel local image statistic to identify impulse noise pixels, and then incorporate it into the classical bilateral filter to form ASTC, which aims to reduce mixtures of the two most common types of noise, Gaussian and impulse noise, in both the spatial and temporal directions. After noise removal, we enhance the video contrast with APMF based on the statistical information of frame segmentation results. The experimental results demonstrate that, for diverse low-quality videos corrupted by mixed noise, underexposure, overexposure, or any combination of the above, the proposed system can automatically produce satisfactory results.

Copyright © 2008 Chao Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
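The ASTC filter described above builds on the classical bilateral filter, which averages each pixel with its neighbors using weights that combine spatial closeness and intensity similarity, so that edges are preserved while noise is smoothed. A minimal single-frame sketch of that classical filter (not the authors' ASTC extension; parameter names `sigma_s` and `sigma_r` are illustrative choices) might look like:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    """Classical bilateral filter on a grayscale image.

    Each pixel becomes a weighted average of its neighborhood, where the
    weight is the product of a spatial Gaussian (distance in pixels,
    controlled by sigma_s) and a range Gaussian (intensity difference,
    controlled by sigma_r).
    """
    img = img.astype(np.float64)
    h, w = img.shape
    padded = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)

    # Precompute the spatial (domain) kernel once; it is position-independent.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))

    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: neighbors with dissimilar intensity contribute
            # less, which is what preserves edges.
            rng = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

The double loop keeps the sketch readable; a practical implementation would vectorize or use a library routine. The ASTC filter in the paper additionally weights neighbors by an impulse-noise statistic and extends the neighborhood along the temporal axis.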
1. INTRODUCTION
Driven by the rapid development of digital devices, camcorders and cameras are no longer used only for professional work but have entered a variety of application areas such as surveillance and home video making. While capturing videos has become much easier, video defects such as blocking, blur, noise, and contrast distortions are often introduced by many uncontrollable factors: unprofessional recording behavior, information loss during video transmission, undesirable environmental lighting, device defects, and so forth. As a result, there is an increasing demand for video enhancement, a technique that aims to improve a video's visual quality while suppressing different kinds of artifacts. In this paper, we focus on the two most common defects: noise and contrast distortions. While some existing software packages already provide noise removal and contrast enhancement functions, most of them introduce artifacts and cannot produce desirable results for a broad variety of videos. To date, video enhancement remains a challenging research problem in both noise filtering and contrast enhancement. The natural noise in videos is quite complex; fortunately, most noise can be represented by two models: additive Gaussian noise and impulse noise [1, 2].
Additive Gaussian noise generally assumes zero-mean Gaussian distribution and is usually introduced during video acquisition, while impulse noise assumes uniform or discrete distribution and is often caused by
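The mixed-noise model described here, zero-mean additive Gaussian noise plus uniformly distributed impulse (salt-and-pepper) noise, can be sketched as follows. This is an illustrative corruption model for testing a denoiser, not code from the paper; the parameter names `gauss_sigma` and `impulse_prob` are assumptions.

```python
import numpy as np

def add_mixed_noise(frame, gauss_sigma=10.0, impulse_prob=0.05, seed=None):
    """Corrupt an 8-bit grayscale frame with mixed noise.

    First adds zero-mean Gaussian noise (std gauss_sigma), modeling
    acquisition noise; then each pixel is independently replaced with
    probability impulse_prob, half by "salt" (255) and half by
    "pepper" (0), modeling impulse noise.
    """
    rng = np.random.default_rng(seed)

    # Additive zero-mean Gaussian component.
    noisy = frame.astype(np.float64) + rng.normal(0.0, gauss_sigma, frame.shape)

    # Impulse component: a random mask selects corrupted pixels,
    # a second draw decides salt vs. pepper.
    mask = rng.random(frame.shape) < impulse_prob
    salt = rng.random(frame.shape) < 0.5
    noisy[mask & salt] = 255.0
    noisy[mask & ~salt] = 0.0

    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Such a model is useful for generating controlled test inputs: a denoising filter can be evaluated by corrupting a clean frame with known `gauss_sigma` and `impulse_prob` and measuring how well the original is recovered.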