Spatio-temporal Background Models for Outdoor Surveillance
Robert Pless
Department of Computer Science and Engineering, Washington University in St. Louis, MO 63130, USA
Email: [email protected]

Received 2 January 2004; Revised 1 September 2004

Video surveillance in outdoor areas is hampered by consistent background motion, which defeats systems that use motion to identify intruders. While algorithms exist for masking out regions with motion, a better approach is to develop a statistical model of the typical dynamic video appearance. This allows the detection of potential intruders even in front of trees and grass waving in the wind, waves across a lake, or cars moving past. In this paper we present a general framework for the identification of anomalies in video, and a comparison of statistical models that characterize the local video dynamics at each pixel neighborhood. A real-time implementation of these algorithms runs on an 800 MHz laptop, and we present qualitative results in many application domains.

Keywords and phrases: anomaly detection, dynamic backgrounds, spatio-temporal image processing, background subtraction, real-time application.
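To make the idea of "modeling typical dynamic appearance and flagging deviations" concrete, the following is a minimal, hypothetical sketch and not the specific statistical models compared in this paper: each pixel keeps a running mean and variance of its measurement over time, and a new frame is flagged wherever the measurement falls many standard deviations from that running model. Class name, learning rate, and threshold are illustrative assumptions.

```python
# Hypothetical per-pixel background model: running mean/variance per pixel,
# with an anomaly mask for measurements far from the learned statistics.
# This is a sketch of the general framework, not the paper's exact models.
import numpy as np

class PerPixelBackground:
    def __init__(self, shape, alpha=0.02, eps=1e-6):
        self.mean = np.zeros(shape)   # running mean of the measurement
        self.var = np.ones(shape)     # running variance of the measurement
        self.alpha = alpha            # exponential learning rate (assumed value)
        self.eps = eps                # guards against division by zero

    def update_and_score(self, measurement, k=3.0):
        # Squared deviation in units of standard deviations, per pixel.
        z2 = (measurement - self.mean) ** 2 / (self.var + self.eps)
        anomaly_mask = z2 > k ** 2
        # Exponentially weighted update of the background statistics.
        self.mean = (1 - self.alpha) * self.mean + self.alpha * measurement
        self.var = (1 - self.alpha) * self.var + self.alpha * (measurement - self.mean) ** 2
        return anomaly_mask

# Usage with raw grayscale intensity as the per-pixel measurement:
model = PerPixelBackground(shape=(120, 160))
frame = np.random.rand(120, 160)          # stand-in for one video frame
mask = model.update_and_score(frame)      # True where the pixel looks anomalous
```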
1. INTRODUCTION
Computer vision has had the most success in well-constrained environments. Well-constrained environments allow the use of significant prior expectations, explicit or controlled background models, easily detectable features, and effective closed-world assumptions. In many surveillance applications, the environment cannot be explicitly controlled and may contain significant and irregular motion. However irregular, the natural appearance of a scene as viewed by a static video camera is often highly constrained. Developing representations of these constraints, that is, models of the typical (dynamic) appearance of the scene, will provide significant benefits to many vision algorithms. These models capture the dynamics of video from a static camera viewing scenes such as trees waving in the wind, traffic patterns in an intersection, and waves over water.

This paper develops a framework for statistical models to represent dynamic scenes. The approach is based upon spatio-temporal image analysis, and it explicitly avoids finding or tracking image features. Instead, the video is considered to be a 3D function giving the image intensity as it varies in space (across the image) and time. The fundamental atoms of the image processing are the values of this function and the responses to spatio-temporal filters (such as derivative filters), measured at each pixel in each frame. Unlike interest points or features, these measurements are defined at every pixel in the video sequence. Appropriately designed filters may give robust measurements to form a basis for further processing. Optimality criteria and algorithms for creating derivative and blurring filters of a particular size and orientation lead to significantly better results than estimating derivatives by applying Sobel filters to raw images [1]. For these reasons, spatio-temporal image processing is an ideal first step.
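As an illustration of this style of processing, the sketch below treats a grayscale clip as a 3D intensity volume I(x, y, t) and computes derivative-of-Gaussian responses along x, y, and t at every pixel of every frame. It assumes only NumPy and SciPy; the single filter scale is an illustrative stand-in for the optimized derivative and blurring filters of [1].

```python
# Minimal sketch of per-pixel spatio-temporal measurements: the video is a
# 3D function of (t, y, x), and each pixel of each frame gets the responses
# of smoothed derivative filters along the three axes. Filter scale is an
# assumed, illustrative value.
import numpy as np
from scipy.ndimage import gaussian_filter

def spatiotemporal_derivatives(video, sigma=1.5):
    """video: float array of shape (T, H, W) holding grayscale frames.

    Returns (I_x, I_y, I_t), each of shape (T, H, W): the response of a
    derivative-of-Gaussian filter along each axis, defined at every pixel.
    """
    volume = video.astype(np.float64)
    # order=(dt, dy, dx): differentiate along one axis, blur along the others.
    I_t = gaussian_filter(volume, sigma, order=(1, 0, 0))
    I_y = gaussian_filter(volume, sigma, order=(0, 1, 0))
    I_x = gaussian_filter(volume, sigma, order=(0, 0, 1))
    return I_x, I_y, I_t

# Example with a random stand-in for a short clip; the per-pixel vectors
# (I_x, I_y, I_t) are the kind of raw measurement a statistical background
# model can be built over.
clip = np.random.rand(32, 120, 160)
Ix, Iy, It = spatiotemporal_derivatives(clip)
```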