Robust Real-Time Tracking for Visual Surveillance
Research Article

David Thirde,1 Mark Borg,1 Josep Aguilera,2 Horst Wildenauer,2 James Ferryman,1 and Martin Kampel2

1 School of Systems Engineering, Computational Vision Group, The University of Reading, Reading RG6 6AY, UK
2 Computer Science Department, Pattern Recognition and Image Processing Group, Vienna University of Technology, 1040 Vienna, Austria
Received 21 October 2005; Revised 23 March 2006; Accepted 18 May 2006

Recommended by John MacCormick

This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed.

Copyright © 2007 Hindawi Publishing Corporation. All rights reserved.
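The four component modules above can be read as a per-frame pipeline whose per-camera results are later fused into world-level objects. The following is a purely illustrative sketch of that structure; all class and function names, and the stub logic inside them, are assumptions for exposition and are not an API described in the paper.

```python
# Hypothetical sketch of the four-stage pipeline from the abstract.
# Frames are modelled as lists of pixel values; every body below is
# a stand-in for the real algorithm named in its docstring.

class LayeredBackgroundModel:
    """Stub for (i): motion detection via layered background subtraction."""
    def subtract(self, frame):
        # Flag any pixel that differs from the (trivial) background value 0.
        return [px != 0 for px in frame]

def track_objects(frame, mask):
    """Stub for (ii): appearance-based tracking -> per-frame detections."""
    return [i for i, is_fg in enumerate(mask) if is_fg]

def recognise(detection):
    """Stub for (iii): hierarchical recognition -> coarse category."""
    return "vehicle" if detection % 2 == 0 else "person"

def fuse(per_camera_results):
    """Stub for (iv): multisensor fusion -> merged world-level objects."""
    merged = set()
    for results in per_camera_results:
        merged.update(results)
    return sorted(merged)

def process_frame(frame, model):
    """Run stages (i)-(iii) on one camera frame."""
    mask = model.subtract(frame)
    detections = track_objects(frame, mask)
    return [(d, recognise(d)) for d in detections]

model = LayeredBackgroundModel()
cam1 = process_frame([0, 7, 0, 3], model)
cam2 = process_frame([5, 0, 0, 0], model)
world = fuse([cam1, cam2])
# world -> [(0, 'vehicle'), (1, 'person'), (3, 'person')]
```

The point of the sketch is only the data flow: stages (i)-(iii) run independently per camera, and stage (iv) merges their outputs.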
1. INTRODUCTION
This paper describes work undertaken on the EU project AVITRACK. The main aim of this project is to automate the supervision of commercial aircraft servicing operations on the ground at airports (in bounded areas known as aprons). Figure 1 shows apron echo-40 at Toulouse Airport in France. The servicing operations are monitored from multiple cameras mounted on the airport building surrounding the apron area; each servicing operation is a complex 30-minute routine involving interaction between aircraft, people, vehicles, and equipment.

The full AVITRACK system is presented in Figure 2. The focus of this paper is the real-time tracking of objects in the scene; this tracking is performed in a decentralised multi-camera environment with overlapping fields of view between the cameras [1]. The output of this stage, the scene tracking module, is the set of predicted physical (i.e., real-world) objects in the monitored scene. These objects are subsequently passed (via a spatiotemporal coherency filter) to a scene understanding module, where the activities within the scene are recognised. This result is fed, in real time, to apron managers at the airport. The modules communicate using the XML standard, which, although inefficient, allows the system to be integrated with little effort. The system must be capable of monitoring a dynamic environment over an extended period of time, and must operate in real time (defined as 12.5 FPS at a resolution of 720 × 576) on colour video streams. More details of the complete system are given in [2].

The tracking of moving objects on the apron has previously been performed using a top-down model-based approach [3], although such methods are generally computationally intensive.
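The real-time requirement stated above fixes a concrete per-frame processing budget: at 12.5 FPS, each frame must be fully processed in 80 ms. The snippet below works this out; the per-module timings are hypothetical numbers chosen for illustration, not measurements from the paper.

```python
# Real-time constraint from the text: 12.5 frames per second
# at 720 x 576 resolution.
FPS = 12.5
budget_ms = 1000.0 / FPS  # 80 ms available per frame

# Illustrative (assumed) per-module timings in milliseconds.
module_times_ms = {
    "motion_detection": 30.0,
    "object_tracking": 25.0,
    "object_recognition": 10.0,
    "data_fusion": 10.0,
}

total_ms = sum(module_times_ms.values())
meets_realtime = total_ms <= budget_ms
print(budget_ms, total_ms, meets_realtime)  # 80.0 75.0 True
```

Any combination of modules whose summed latency exceeds 80 ms would violate the stated real-time constraint, which is why computationally intensive approaches such as full model-based tracking are noted as problematic.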