Minimal Solvers for Generalized Pose and Scale Estimation from Two Rays and One Point
1 Department of Computer Science, ETH Zurich, Zurich, Switzerland
{federico.camposeco,torsten.sattler,marc.pollefeys}@inf.ethz.ch
2 Microsoft, Redmond, USA
Abstract. Estimating the poses of a moving camera with respect to a known 3D map is a key problem in robotics and Augmented Reality applications. Instead of solving for each pose individually, the trajectory can be considered as a generalized camera. Thus, all poses can be jointly estimated by solving a generalized PnP (gPnP) problem. In this paper, we show that the gPnP problem for camera trajectories permits an extremely efficient minimal solution when exploiting the fact that pose tracking allows us to locally triangulate 3D points. We present a problem formulation based on one point-point and two point-ray correspondences that encompasses both the case where the scale of the trajectory is known and where it is unknown. Our formulation leads to closed-form solutions that are orders of magnitude faster to compute than the current state-of-the-art, while resulting in a similar or better pose accuracy.

Keywords: Absolute camera pose · Pose solver · Generalized cameras

1 Introduction
Estimating the absolute pose of a camera, i.e., the position and orientation from which an image was taken, with respect to a given 3D map is a fundamental building block in many 3D computer vision applications such as Structure-from-Motion (SfM) [27], simultaneous localization and mapping (SLAM) [5], image-based localization [18,26,29,35], Augmented Reality (AR) [21,22], and visual navigation for autonomous vehicles [34]. Traditionally, research on camera pose estimation has mainly focused on individual cameras [8], potentially estimating the extrinsic parameters of the camera pose together with the parameters of its intrinsic calibration [2,10]. In the context of robotics applications such as autonomous drones and vehicles, it is desirable to use multi-camera systems that cover the full field-of-view around the robots. Multi-camera systems can be modelled as a generalized camera [25], i.e., a camera for which not all viewing rays intersect in a single center of projection. Accordingly, camera pose estimation for generalized cameras has started to receive attention lately [3,11,15,17,24,30,33].

Electronic supplementary material: The online version of this chapter (doi:10.1007/978-3-319-46454-1_13) contains supplementary material, which is available to authorized users.

© Springer International Publishing AG 2016
B. Leibe et al. (Eds.): ECCV 2016, Part V, LNCS 9909, pp. 202–218, 2016. DOI: 10.1007/978-3-319-46454-1_13
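To make the generalized-camera model concrete, the following Python sketch represents each viewing ray by an origin and a unit direction in the rig (trajectory) frame, so that rays need not share a single center of projection, and evaluates a point-ray residual under a similarity transform (s, R, t) aligning the rig frame with the map frame. All function and variable names are illustrative assumptions; this is not the paper's solver, only the geometric consistency check such a solver must satisfy.

```python
import numpy as np

def point_ray_residual(X_map, ray_origin, ray_dir, R, t, s=1.0):
    """Angular-style residual between a 3D map point and a generalized-camera
    viewing ray, given a similarity transform (scale s, rotation R,
    translation t) mapping rig coordinates to map coordinates.
    Names and interface are illustrative, not from the paper."""
    # Bring the map point into the rig frame: X_rig = R^T (X_map - t) / s.
    X_rig = R.T @ (X_map - t) / s
    # Unit vector from the ray origin toward the transformed point.
    v = X_rig - ray_origin
    v = v / np.linalg.norm(v)
    # 1 - cos(angle) between the viewing ray and the direction to the point;
    # zero iff the ray passes exactly through the point.
    return 1.0 - float(ray_dir @ v)

# A ray constructed to observe the point exactly yields a zero residual.
R = np.eye(3)
t = np.zeros(3)
X = np.array([0.0, 0.0, 5.0])
origin = np.array([0.1, 0.0, 0.0])  # ray origin offset from the rig center
d = X - origin
d = d / np.linalg.norm(d)
print(abs(point_ray_residual(X, origin, d, R, t)) < 1e-12)  # True
```

A point-point correspondence, by contrast, constrains the transformed point to coincide with a locally triangulated 3D point, which is what lets the paper's formulation combine one point-point with two point-ray correspondences.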
In this paper, we consider a problem typically arising in AR or video registration against SfM models [14], where visual-inertial odometry (VIO) [9] or visual odometry (VO) [23] is used to track the pose of the camera over time while registering the trajectory against a previously built 3D map acting as a reference coordinate system for the virtual objects [21]. In this scenario, both the local pose tracking