Incremental Support Vector Machine Framework for Visual Sensor Networks


Research Article

Mariette Awad,1,2 Xianhua Jiang,2 and Yuichi Motai2

1 IBM Systems and Technology Group, Department 7t Foundry, Essex Junction, VT 05452, USA
2 Department of Electrical and Computer Engineering, The University of Vermont, Burlington, VT 05405, USA
Received 4 January 2006; Revised 13 May 2006; Accepted 13 August 2006

Recommended by Ching-Yung Lin

Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of the least squares SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes' inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single-camera sensing, especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique demonstrate the validity of the framework when applied to visual sensor networks. Furthermore, the enabled online learning allows adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system, which makes it even more attractive for distributed sensor network communication.

Copyright © 2007 Mariette Awad et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
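As a rough illustration of the least squares SVM (LS-SVM) formulation that the framework adapts, the following Python sketch trains a binary LS-SVM classifier by solving the standard dual linear system and mimics an incremental step by naively retraining on a batch of appended sensor data. The RBF kernel, the regularization values, and the retrain-from-scratch update are assumptions made only for illustration; they are not the paper's actual online algorithm or decision fusion scheme.

import numpy as np

def rbf_kernel(A, B, gamma_k=0.5):
    # Pairwise RBF (Gaussian) kernel matrix between row-wise samples in A and B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma_k * sq)

def train_lssvm(X, y, gamma=10.0):
    # Solve the LS-SVM dual linear system
    #   [ 0     y^T              ] [ b     ]   [ 0 ]
    #   [ y     Omega + I/gamma  ] [ alpha ] = [ 1 ]
    # with Omega_ij = y_i * y_j * K(x_i, x_j).
    n = X.shape[0]
    Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return alpha, b, X, y

def predict(model, X_new):
    # Decision function: sign( sum_i alpha_i * y_i * K(x, x_i) + b ).
    alpha, b, X, y = model
    return np.sign(rbf_kernel(X_new, X) @ (alpha * y) + b)

def incremental_update(model, X_batch, y_batch, gamma=10.0):
    # Hypothetical placeholder for the online phase: append the batch reported by
    # a sensor node and re-solve; the paper's incremental scheme avoids full retraining.
    _, _, X, y = model
    return train_lssvm(np.vstack([X, X_batch]), np.hstack([y, y_batch]), gamma)

# Example usage with two toy classes labeled +1 / -1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-1.0, size=(20, 2)), rng.normal(loc=1.0, size=(20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])
model = train_lssvm(X, y)
print(predict(model, np.array([[-1.0, -1.0], [1.0, 1.0]])))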

1. INTRODUCTION

Visual sensor networks with embedded computing and communication capabilities are increasingly the focus of an emerging research area aimed at developing new network structures and interfaces that drive novel, ubiquitous, and distributed applications [1]. These applications often attempt to bridge the last interconnection between the outside physical world and the World Wide Web by deploying sensor networks in dense or redundant formations that mitigate hardware failure and loss of information. Machine learning in visual sensor networks is a very useful technique because it reduces the reliance on a priori knowledge. However, it is also very challenging to implement. Additionally, it is subject to the constraints of computing capabilities, fault tolerance, scalability, topology, security, and power consumption.