Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

Research Article

Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

Zhaolin Cheng, Dhanya Devarajan, and Richard J. Radke
Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA

Received 4 January 2006; Revised 18 April 2006; Accepted 18 May 2006
Recommended by Deepa Kundur

We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length "feature digest" that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates (> 0.8) can be achieved while maintaining low false alarm rates (< 0.05) using a simulated 60-node outdoor camera network.

Copyright © 2007 Zhaolin Cheng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
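At a high level, the digest-and-match protocol summarized in the abstract can be sketched as follows. This is an illustrative simplification under assumed parameters, not the authors' implementation: here the "digest" is simply the sender's most distinctive descriptors truncated to a fixed budget, matching uses a nearest-neighbor ratio test, and the edge threshold `tau` is a hypothetical parameter.

```python
import numpy as np

def make_digest(descriptors, strengths, budget):
    """Sender side: keep the `budget` most distinctive descriptors.

    descriptors: (n, d) array of feature descriptors.
    strengths:   (n,) distinctiveness score per feature (hypothetical).
    """
    order = np.argsort(strengths)[::-1]  # most distinctive first
    return descriptors[order[:budget]]

def count_matches(digest, own_descriptors, ratio=0.8):
    """Receiver side: count digest features that match robustly.

    A digest descriptor counts as a match if its nearest neighbor among
    the receiver's descriptors is clearly closer than the second nearest
    (a Lowe-style ratio test).
    """
    matches = 0
    for d in digest:
        dists = np.linalg.norm(own_descriptors - d, axis=1)
        i, j = np.argsort(dists)[:2]  # nearest and second nearest
        if dists[i] < ratio * dists[j]:
            matches += 1
    return matches

def has_vision_graph_edge(digest, own_descriptors, tau=8):
    """Declare an edge if enough digest features match (tau is assumed)."""
    return count_matches(digest, own_descriptors) >= tau
```

In the actual system each camera broadcasts its digest over the network and every receiver runs the matching step locally, so the edge decision is made without any centralized processor.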

1. INTRODUCTION

The automatic calibration of a collection of cameras (i.e., estimating their positions and orientations relative to each other and to their environment) is a central problem in computer vision. It requires techniques both for detecting and matching feature points across the images acquired by the cameras and for subsequently estimating the camera parameters. While these problems have been extensively studied, most prior work assumes that they are solved at a single processor after all of the images have been collected in one place. This assumption is reasonable for much of the early work on multi-camera vision, in which all the cameras are in the same room (e.g., [1, 2]).

However, recent developments in wireless sensor networks have made feasible a distributed camera network, in which cameras and processing nodes may be spread over a wide geographical area, with no centralized processor and limited ability to communicate a large amount of information over long distances. Such distributed camera networks require new techniques for correspondence and calibration, ones that take explicit account of the underlying communication network and its constraints. In this paper, we address the problem of efficiently estimating the vision graph for an ad-hoc camera network, in which each camera is represented by a node, and an edge appears between two nodes if the two cameras jointly image a sufficiently