Moving target extraction and background reconstruction algorithm
ORIGINAL RESEARCH
Shi Qiu1 · Xuemei Li2
* Xuemei Li, [email protected]
1 Key Laboratory of Spectral Imaging Technology CAS, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, People's Republic of China
2 Shanghai Jiao Tong University, Shanghai, People's Republic of China
Received: 24 February 2020 / Accepted: 12 October 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
Abstract
It is difficult for a computer to distinguish a target from the background when the target remains static for a long time after moving. A new moving target detection and background reconstruction algorithm is proposed and applied to RGB video for the first time. First, the proposed algorithm builds a model in the time dimension to extract the changed region. It then combines this with spatial information to extract the moving target completely. A spatiotemporal correlation model is established to construct a pure background. Experimental results show that the proposed algorithm can effectively reconstruct the background and achieves a high recognition rate for moving targets.
Keywords Moving target extraction · Background reconstruction · RGB · Time dimension · Space dimension
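The abstract outlines a pipeline of temporal change detection, spatial refinement and selective background updating. The Python/OpenCV sketch below illustrates that general idea only; it is not the authors' model. The running-average temporal model, the morphological spatial refinement, and all parameter values (diff_thresh, alpha, kernel size) are assumptions chosen for illustration.

```python
# Illustrative sketch only: a generic temporal-difference + spatial-refinement
# pipeline with selective background updating. Thresholds, kernel sizes and
# the learning rate are assumed values, not taken from the paper.
import cv2
import numpy as np

def extract_and_reconstruct(video_path, diff_thresh=30, alpha=0.05):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("cannot read video: %s" % video_path)

    # Temporal model: a running-average background estimate kept in float32.
    background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Time dimension: pixels that changed with respect to the background model.
        diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

        # Space dimension: morphological opening/closing to recover a more complete target region.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

        # Background reconstruction: update only pixels currently labelled as background,
        # so a target that stops moving is not absorbed into the background too quickly.
        cv2.accumulateWeighted(gray, background, alpha, mask=cv2.bitwise_not(mask))

        yield mask, cv2.convertScaleAbs(background)

    cap.release()
```

Updating only the pixels labelled as background is one simple way to keep a temporarily static target out of the reconstructed background, which is the failure case the abstract highlights.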
1 Introduction
Video surveillance is a technique for monitoring specific areas through fixed cameras, and an intuitive and effective way to record and understand a scene (Appathurai et al. 2020). As cameras have become colour-capable, high-definition and inexpensive, video surveillance systems have become increasingly integrated into people's lives, and computer vision technology based on surveillance video has emerged (Petrov et al. 2018). When a computer processes surveillance video, two key technologies (Qiu et al. 2019a, b) are the extraction of moving targets and background reconstruction. Existing algorithms for moving target extraction and background reconstruction in surveillance video include the following. Yang et al. (2004) propose a multi-scale filtering framework to extract moving targets in real time. Campbell et al. (2004) determine the position of the moving target with the optical flow method. Zhang et al. (2004) segment images by applying a specific threshold to the difference between frames (sketched below). Liu et al. (2005) use the Level Set method to extract image boundaries. Weng et al. (2006) establish Kalman filtering to predict the direction of the moving target. Liu et al. (2006) use an active contour model to constrain optical flow and achieve target segmentation. Zhan et al. (2007) take gray-level and boundary information into overall consideration to construct segmentation models. Unnikrishnan et al. (2007) set up a Markov random field to segment images. Chen et al. (2008) use the region growing method to extract moving targets. Carmona et al. (2008) integrate prior information into the target extraction model. Tomás et al. (2009) integrate the morphology and the pixel
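As a point of reference for the inter-frame difference approach mentioned in the survey above, the basic idea reduces to thresholding the absolute difference of consecutive grayscale frames. The sketch below is a minimal illustration of that generic technique, not of any cited method; the threshold value is an assumed placeholder.

```python
# Minimal sketch of inter-frame difference segmentation: threshold the absolute
# difference of two consecutive grayscale frames to obtain a change mask.
# The threshold of 25 is an assumed value, not taken from the cited work.
import cv2

def frame_difference_mask(prev_gray, curr_gray, thresh=25):
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```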