Track color space-time interest points in video
I. Bellamine 1 & H. Silkan 1 & A. Tmiri 1
Received: 27 September 2019 / Revised: 14 April 2020 / Accepted: 5 May 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
Color Space-Time Interest Points (CSTIP) are among the most useful low-level features that can be extracted from videos; they provide an efficient characterization of moving objects. CSTIP are simple to compute and can be used for video stabilization, camera motion estimation, and object tracking. In this paper, we show how the resulting features often reflect interesting events that can be used both for a compact representation of video data and for tracking. To increase the robustness of CSTIP feature extraction, we propose a pre-processing step based on a color video decomposition that splits the input images into dynamic color texture and structure components. The Color Space-Time Interest Points associated with the dynamic color texture components are then computed with the proposed CSTIP detection algorithm. Finally, the point tracker follows the set of detected CSTIP using the robust Zero-Mean Normalized Cross-Correlation (ZNCC) feature-tracking algorithm. Experimental results are reported on very different types of videos, namely sport videos and animation movies.
Keywords Color space-time interest points . Color video decomposition . Tracking . Zero-mean normalized cross-correlation
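To illustrate the ZNCC similarity measure that underlies the tracking step, the following is a minimal sketch in Python/NumPy; the function name, the eps parameter, and the search strategy described in the comments are our own illustrative assumptions, not the authors' implementation.

import numpy as np

def zncc(patch_a, patch_b, eps=1e-8):
    # Zero-Mean Normalized Cross-Correlation between two equally sized patches.
    # Returns a score in [-1, 1]; values close to 1 indicate a strong match.
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

# Tracking use (sketch): for an interest point at (x, y) in frame t, extract its patch,
# evaluate zncc() against candidate patches in a search window of frame t+1, and keep
# the candidate with the highest score as the tracked position.

Because the mean is subtracted and the result is normalized, the score is invariant to affine changes in patch intensity, which is what makes ZNCC robust for matching the same point across frames.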
* I. Bellamine: [email protected]
H. Silkan: [email protected]
A. Tmiri: [email protected]
1 LAROSERI, Department of Computer Science, Faculty of Sciences, Chouaïb Doukkali University, 24000 El Jadida, Morocco
Multimedia Tools and Applications
1 Introduction
Motion analysis is a very active research area that covers a number of problems: segmentation [16, 26], tracking [10–12, 25], human action recognition [14, 24] and motion detection [7, 17, 20, 37, 44]. Detecting moving objects in an image sequence is an important low-level task for many computer vision applications [13, 30], such as video surveillance, traffic monitoring, video indexing, gesture recognition, analysis of sport events, mobile robotics, and the study of the behavior of objects (people, animals, vehicles, etc.). In the literature, the notion of Space-Time Interest Points (STIP) is especially interesting because they concentrate information initially spread over thousands of pixels into a few specific points that can be related to spatio-temporal events in an image [20]. STIP are among the most useful low-level features that can be extracted from videos. Laptev and Lindeberg [20] were the first to propose STIP for action recognition, by introducing a space-time extension of the popular Harris detector [18]. They detect regions with high intensity variation in both space and time as spatio-temporal corners. The Harris3D detector usually suffers from sparse STIP detection [20]. Later, several other methods for detecti
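To make the space-time extension of the Harris detector concrete, the following is a minimal sketch in Python/NumPy/SciPy, assuming a gray-level video volume; the function name, the scale parameters (sigma_sp, sigma_t, s), and the constant k are illustrative assumptions and do not reproduce the detector used in this paper. It builds the space-time second-moment matrix from smoothed gradients and forms the Harris3D response whose local maxima are STIP candidates.

import numpy as np
from scipy.ndimage import gaussian_filter

def harris3d_response(volume, sigma_sp=2.0, sigma_t=1.5, s=2.0, k=0.005):
    # volume: gray-level video as a (t, y, x) array.
    # Smooth at the local space-time scale, then take spatio-temporal gradients.
    v = gaussian_filter(volume.astype(np.float64), sigma=(sigma_t, sigma_sp, sigma_sp))
    Lt, Ly, Lx = np.gradient(v)

    def g(a):
        # Integrate each product of gradients over a larger space-time scale (factor s).
        return gaussian_filter(a, sigma=(s * sigma_t, s * sigma_sp, s * sigma_sp))

    Mxx, Myy, Mtt = g(Lx * Lx), g(Ly * Ly), g(Lt * Lt)
    Mxy, Mxt, Myt = g(Lx * Ly), g(Lx * Lt), g(Ly * Lt)
    # Harris3D response: det(M) - k * trace(M)^3; spatio-temporal corners are its local maxima.
    det = (Mxx * (Myy * Mtt - Myt * Myt)
           - Mxy * (Mxy * Mtt - Myt * Mxt)
           + Mxt * (Mxy * Myt - Myy * Mxt))
    trace = Mxx + Myy + Mtt
    return det - k * trace ** 3

Points where this response is a local maximum in (x, y, t) correspond to regions whose intensity varies strongly in both space and time, which is the intuition behind treating them as spatio-temporal corners.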