A Novel Distributed Privacy Paradigm for Visual Sensor Networks Based on Sharing Dynamical Systems
Research Article

William Luh, Deepa Kundur, and Takis Zourntos
Department of Electrical and Computer Engineering, 214 Zachry Engineering Center, Texas A&M University, College Station, TX 77843-3128, USA

Received 5 January 2006; Revised 29 April 2006; Accepted 30 April 2006

Recommended by Chun-Shien Lu

Visual sensor networks (VSNs) provide surveillance images/video which must be protected from eavesdropping and tampering en route to the base station. In the spirit of sensor networks, we propose a novel paradigm for securing privacy and confidentiality in a distributed manner. Our paradigm is based on the control of dynamical systems, which we show is well suited for VSNs due to its low complexity in terms of processing and communication, while achieving robustness to both unintentional noise and intentional attacks as long as only a small subset of nodes are affected. We also present a low complexity algorithm called TANGRAM to demonstrate the feasibility of applying our novel paradigm to VSNs. We present and discuss simulation results of TANGRAM.

Copyright © 2007 Hindawi Publishing Corporation. All rights reserved.
1. INTRODUCTION
Visual data is an integral part of the interface between humans and their environment. Visual data in the form of images and video can enhance a human operator's ability to reliably make crucial decisions in the face of alerts provided by sensing mechanisms. For example, in a combat field, a sensor network can be deployed to sense temperature, toxins, vibrations/movement, and so forth. To reliably assess whether a change in the sensed phenomena is due to enemy infiltration or to natural environmental and fauna causes, it is useful to obtain additional side information in the form of an image. As another example, in health care facilities [1, 2], one may measure a patient's vital statistics, such as heart rate, using sensors. When such measured statistics indicate that the patient is in imminent danger, visual side information may quickly determine whether the measurements are valid or are caused by misplaced or malfunctioning sensors. Following this motivation, acquisition of visual data in sensor networks can enhance the quality of service in surveillance applications in which a human operator interfaces at the sink of the network [3]. Such sensor networks are called visual sensor networks (VSNs) or, often, multimedia sensor networks [4]. The emergence of low-cost portable off-the-shelf sensor devices has propelled the development of VSN architectures, systems, and testbeds [1–3, 5–12].
Acquisition and processing of visual data in sensor networks come at a cost. First, visual data in the form of images or video require larger storage and transmission resources than do traditional scalar data such as temperature or heart rate. These resource requirements are further inflated when every sensor is equipped to acquire and process images and video. Furthermore, image processing re