Rain Streaks and Snowflakes Removal for Video Sequences via Motion Compensation and Matrix Completion
ORIGINAL RESEARCH
Yutong Zhou1 · Nobutaka Shimada1

Received: 31 March 2020 / Accepted: 14 September 2020
© Springer Nature Singapore Pte Ltd 2020
Abstract
Image and video deraining aims to reconstruct the original scene, from which human vision and computer vision systems can better identify objects and details present in images and video sequences. This paper proposes a three-step method to detect and remove rain streaks, and even snowflakes, from the great majority of video sequences, using motion compensation and low-rank matrix completion. First, we adopt optical flow estimation between consecutive frames to detect the motion of rain streaks. We then employ online dictionary learning for sparse representation, together with an SVM classifier, to eliminate candidate regions that are not rain streaks. Finally, we reconstruct the video sequence using low-rank matrix completion. In particular, by introducing an image dehazing network (GCANet) into the proposed method, the dense rain accumulation and blur caused by heavy rain can also be handled well. The experimental results demonstrate that the proposed algorithm performs qualitatively and quantitatively better on several image quality metrics, improving the best published PSNR by 4.47% and 6.05% on two static video sequences and by 12.13% on a more challenging dynamic video sequence. In addition, to demonstrate the generality of the proposed method, we further apply it to two challenging tasks, on which it also achieves state-of-the-art performance.

Keywords Rain streaks and snowflakes removal · Motion compensation · Sparse representation technique · SVM · Block matching estimation
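The detect-and-reconstruct idea behind the three-step pipeline can be illustrated with a deliberately simplified NumPy sketch. It assumes a static camera (so motion compensation reduces to the identity) and substitutes a temporal median for full low-rank matrix completion; the function name, threshold, and synthetic clip below are illustrative and are not taken from the paper's implementation.

```python
import numpy as np

def remove_rain_streaks(frames, thresh=25):
    """Simplified rain removal for a static-camera clip.

    frames: (T, H, W) uint8 grayscale stack.
    Rain streaks appear as transient brightness spikes, so pixels far
    above the temporal median are flagged as rain and replaced by that
    median (a cheap stand-in for low-rank matrix completion).
    """
    stack = frames.astype(np.float32)
    background = np.median(stack, axis=0)       # per-pixel temporal background
    rain_mask = (stack - background) > thresh   # transient bright outliers
    restored = np.where(rain_mask, background, stack)
    return restored.astype(np.uint8), rain_mask

# Synthetic demo: a flat grey clip with random bright "streak" pixels.
rng = np.random.default_rng(0)
clip = np.full((10, 32, 32), 100, dtype=np.uint8)
t = rng.integers(0, 10, 50)
y = rng.integers(0, 32, 50)
x = rng.integers(0, 32, 50)
clip[t, y, x] = 240                             # inject rain-like spikes
clean, mask = remove_rain_streaks(clip)
print(clean.max(), mask.any())                  # spikes suppressed to background
```

A real implementation would warp neighbouring frames by the estimated optical flow before taking the temporal statistics, and would solve a nuclear-norm minimisation over the masked entries instead of copying the median.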
Introduction

This article is part of the topical collection "Machine Learning in Pattern Analysis" guest edited by Reinhard Klette, Brendan McCane, Gabriella Sanniti di Baja, Palaiahnakote Shivakumara and Liang Wang.

* Yutong Zhou [email protected]
  Nobutaka Shimada [email protected]

1 College of Information Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga 525-8577, Japan

Outdoor vision-based systems rely increasingly on computer vision and are widely applied in intelligent transportation, public safety, sports performance analysis, etc. However, images and videos are vulnerable to adverse weather conditions [18], which not only greatly reduce image quality but also interfere with nearby pixels [24] and affect the effectiveness
of computer vision algorithms, such as navigation applications [5], object detection [16], object tracking [10, 57], object recognition [40], scene analysis [23, 45], person re-identification [3], event detection [58], video-surveillance systems [5], and other fields. Based on their physical characteristics, adverse weather conditions can be generally classified into steady systems (fog, sandstorms, haze, etc.) and dynamic systems (rain, snow, hail, etc.) [41]. For