Visual Sensor Networks


Editorial

Visual Sensor Networks

Deepa Kundur,1 Ching-Yung Lin,2 and Chun-Shien Lu3

1 Wireless Communications Lab, Electrical and Computer Engineering Department, Texas A&M University, College Station, TX 77843-3128, USA
2 Distributed Computing Department, IBM T.J. Watson Research Center, Hawthorne, NY 10532, USA
3 Institute of Information Science, Academia Sinica, Taipei 11529, Taiwan

Received 17 January 2007; Accepted 17 January 2007

Copyright © 2007 Deepa Kundur et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Research into the design, development, and deployment of networked sensing devices for high-level inference and surveillance of the physical environment has grown tremendously in the last few years. This trend has been motivated, in part, by recent technological advances in electronics, communication networking, and signal processing.

Sensor networks are commonly composed of lightweight distributed sensor nodes, such as low-cost video cameras. There is inherent redundancy in the number of nodes deployed and in the corresponding network topology. Operation of the network requires autonomous peer-based collaboration among the nodes and intermediate data-centric processing among local sensors. This intermediate processing, known as in-network processing, is application-specific. Often, the sensors are untethered, so they must communicate wirelessly and be battery-powered.
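The editorial does not prescribe any particular algorithm, but as a concrete illustration of in-network processing, the following is a minimal, hypothetical sketch of what a single battery-powered camera node might do: detect motion locally via frame differencing and transmit only a compact event summary, rather than streaming raw video to a central station. The threshold, frame source, and transmit function are placeholders chosen for illustration only.

```python
# Hypothetical sketch of in-network processing on a battery-powered camera node.
# Instead of streaming raw frames to a central server, the node detects motion
# locally and transmits only a small event summary (timestamp + activity level).

import time
import numpy as np

MOTION_THRESHOLD = 12.0  # mean absolute pixel difference that counts as an "event"

def detect_event(prev_frame: np.ndarray, curr_frame: np.ndarray) -> float:
    """Return the mean absolute difference between two grayscale frames."""
    return float(np.mean(np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))))

def transmit(summary: dict) -> None:
    """Placeholder for the node's wireless transmit step."""
    print("TX:", summary)

def run_node(frame_source) -> None:
    """Process frames locally; only event summaries leave the node."""
    prev = next(frame_source)
    for frame in frame_source:
        activity = detect_event(prev, frame)
        if activity > MOTION_THRESHOLD:
            transmit({"time": time.time(), "activity": round(activity, 2)})
        prev = frame

if __name__ == "__main__":
    # Simulated frames: a mostly static scene with an occasional bright patch.
    def fake_camera(n_frames: int = 50, size: int = 64):
        rng = np.random.default_rng(0)
        base = rng.integers(0, 255, size=(size, size), dtype=np.uint8)
        for i in range(n_frames):
            frame = base.copy()
            if i % 17 == 0:          # inject an "intruder" every so often
                frame[:32, :32] = 255
            yield frame

    run_node(fake_camera())
```

In a real deployment the detection rule and summary format would be application-specific, and neighboring nodes might further aggregate each other's summaries before anything reaches the network edge; the point of the sketch is only that raw imagery never leaves the node.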

Initial focus was placed on the design of sensor networks in which scalar phenomena such as temperature, pressure, or humidity were measured. It is envisioned, however, that much societal use of sensor networks will also be based on content-rich vision-based sensors. The volume of data collected, as well as the sophistication of the necessary in-network stream content processing, presents a diverse set of challenges in comparison with generic scalar sensor network research.

Applications that will be facilitated by the development of visual sensor networking technology include automatic tracking, monitoring, and signaling of intruders within a physical area; assisted living for the elderly or physically disabled; environmental monitoring; and command and control of unmanned vehicles.

Many current video-based surveillance systems have centralized architectures that collect all visual data at a central location for storage or real-time interpretation by a human operator. The use of distributed processing for automated event detection would significantly relieve human operators of mundane or time-critical activities and provide better network scalability. Thus, it is expected that video surveillance solutions of the future will successfully utilize visual sensor networking technologies. Given that the field of visual sensor networking is still in its infancy, it is critical that researchers from diverse disciplines, including signal processing, communications, an