RGB-D camera calibration and trajectory estimation for indoor mapping



Liang Yang · Ivan Dryanovski · Roberto G. Valenti · George Wolberg · Jizhong Xiao

Received: 3 April 2019 / Accepted: 27 July 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract
In this paper, we present a system for estimating the trajectory of a moving RGB-D camera, with applications to building maps of large indoor environments. Unlike most current research, we propose a 'feature model' based RGB-D visual odometry system for computationally constrained mobile platforms, where the 'feature model' is persistent and dynamically updated from new observations using a Kalman filter. We first propose a mixture-of-Gaussians model for the random noise in depth readings, which describes the spatial uncertainty of the feature point cloud. In addition, we introduce a general depth calibration method that removes systematic errors from the depth readings of the RGB-D camera. We provide comprehensive theoretical and experimental analysis to demonstrate that our model-based iterative-closest-point (ICP) algorithm achieves much higher localization accuracy than conventional ICP. The visual odometry runs at 30 Hz or higher on VGA images, in a single thread on a desktop CPU, with no GPU acceleration required. Finally, we examine the problem of place recognition from RGB-D images, in order to form a pose-graph SLAM approach for refining the trajectory and closing loops. We evaluate the effectiveness of the system using publicly available datasets with ground-truth data. The entire system is available for free and open-source online.

Keywords RGB-D · Computer vision · 3D mapping · Camera calibration
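The abstract's persistent 'feature model' is updated from new observations with a Kalman filter; the paper's full formulation (3D features with depth-derived covariances) appears in the body. As a minimal, illustrative sketch only (the function name is hypothetical, not from the paper), the scalar form of fusing one new measurement into a feature estimate is:

```python
def kalman_update(mu, var, z, var_z):
    """Fuse one new measurement z (with variance var_z) into an existing
    estimate mu (with variance var). Returns the posterior mean/variance."""
    k = var / (var + var_z)        # Kalman gain: weight given to the measurement
    mu_post = mu + k * (z - mu)    # shift estimate toward the measurement
    var_post = (1.0 - k) * var     # uncertainty shrinks after each fusion
    return mu_post, var_post
```

Repeated application over many observations is what lets a persistent feature become more certain than any single noisy depth reading.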

1 Introduction

Liang Yang and Ivan Dryanovski have contributed equally to this paper. This work is supported in part by the U.S. Army Research Office under Grant No. W911NF-09-1-0565, the U.S. National Science Foundation under Grants No. IIS-0644127 and No. CBET-1160046, the Federal Highway Administration (FHWA) under Grant No. DTFH61-12-H-00002, and PSC-CUNY under Grant No. 65789-00-43. Electronic supplementary material The online version of this article (https://doi.org/10.1007/s10514-020-09941-w) contains supplementary material, which is available to authorized users.


An RGB-D camera is a device which provides two concurrent image streams: a conventional color image, and a depth image containing a measure of the distance from the camera to each observed pixel along the optical axis. The two images can be used together to obtain a dense, textured 3D model of the observed scene. The properties of RGB-D data, together with the low cost of current devices, have made RGB-D cameras very popular among the computer vision and robotics communities. RGB-D data has been used in various applications, including visual odometry, SLAM, scene modeling, and object recognition.
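Combining the two streams into a 3D model rests on back-projecting each depth pixel through the camera's pinhole intrinsics. A minimal sketch of this step (the function name and intrinsic values below are illustrative, not from the paper):

```python
import numpy as np

def depth_to_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an organized point cloud.

    fx, fy, cx, cy are the pinhole intrinsics of the depth camera.
    Returns an (H, W, 3) array of XYZ points; zero-depth (invalid)
    pixels are mapped to NaN.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel column/row indices
    z = depth.astype(np.float64)
    z[z == 0] = np.nan                              # mark missing depth readings
    x = (u - cx) * z / fx                           # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```

Coloring each 3D point with the corresponding RGB pixel (after aligning the two streams with the extrinsic calibration between the sensors) then yields the dense, textured model described above.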

Jizhong Xiao [email protected]

Ivan Dryanovski [email protected]

1 Department of Computer Science, The Grad
