VolumeDeform: Real-Time Volumetric Non-rigid Reconstruction
1 University of Erlangen-Nuremberg, Erlangen, Germany
[email protected]
2 Max-Planck-Institute for Informatics, Saarbrücken, Germany
3 Stanford University, Stanford, USA
Abstract. We present a novel approach for the reconstruction of dynamic geometric shapes using a single hand-held consumer-grade RGB-D sensor at real-time rates. Our method builds up the scene model from scratch during the scanning process, thus it does not require a predefined shape template to start with. Geometry and motion are parameterized in a unified manner by a volumetric representation that encodes a distance field of the surface geometry as well as the non-rigid space deformation. Motion tracking is based on a set of extracted sparse color features in combination with a dense depth constraint. This enables accurate tracking and drastically reduces drift inherent to standard model-to-depth alignment. We cast finding the optimal deformation of space as a non-linear regularized variational optimization problem by enforcing local smoothness and proximity to the input constraints. The problem is tackled in real time at the camera's capture rate using a data-parallel flip-flop optimization strategy. Our results demonstrate robust tracking even for fast motion and scenes that lack geometric features.
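A regularized variational energy of the kind described in the abstract can be sketched as follows; the specific terms, symbols, and weights below are our own illustrative notation, not the paper's exact formulation:

```latex
E(\mathcal{T}) \;=\; \omega_{s}\, E_{\mathrm{sparse}}(\mathcal{T})
\;+\; \omega_{d}\, E_{\mathrm{dense}}(\mathcal{T})
\;+\; \omega_{r}\, E_{\mathrm{reg}}(\mathcal{T}),
```

where \(E_{\mathrm{sparse}}\) penalizes distances between deformed model points and matched sparse color features, \(E_{\mathrm{dense}}\) is a dense model-to-depth alignment term, \(E_{\mathrm{reg}}\) enforces local smoothness of the space deformation \(\mathcal{T}\), and the \(\omega\) weights balance proximity to the input constraints against regularization.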
1 Introduction
Nowadays, RGB-D cameras, such as the Microsoft Kinect, Asus Xtion Pro, or Intel RealSense, have become an affordable commodity accessible to everyday users. With the introduction of these sensors, research has started to develop efficient algorithms for dense static 3D reconstruction. KinectFusion [1,2] has shown that despite their low camera resolution and adverse noise characteristics, high-quality reconstructions can be achieved, even in real time. Follow-up work extended the underlying data structures and depth fusion algorithms in order to provide better scalability for handling larger scenes [3-6] and a higher reconstruction quality [7,8].

Video: https://youtu.be/lk_yX-O_Y5c

Electronic supplementary material: The online version of this chapter (doi:10.1007/978-3-319-46484-8_22) contains supplementary material, which is available to authorized users.

© Springer International Publishing AG 2016
B. Leibe et al. (Eds.): ECCV 2016, Part VIII, LNCS 9912, pp. 362-379, 2016. DOI: 10.1007/978-3-319-46484-8_22
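As context for the depth fusion such systems build on, the following is a minimal sketch of KinectFusion-style weighted running-average TSDF fusion. The function and its signature are our own illustrative helper, not code from the paper:

```python
import numpy as np

def fuse_depth(tsdf, weights, new_d, new_w, max_weight=64.0):
    """Fuse one depth frame into a TSDF volume by weighted averaging.

    tsdf, weights: per-voxel truncated signed distances and fusion weights.
    new_d, new_w:  truncated signed distances and weights computed from the
                   current depth frame (new_w = 0 where a voxel is unobserved).
    """
    w_sum = weights + new_w
    seen = w_sum > 0  # only update voxels observed at least once
    tsdf[seen] = (weights[seen] * tsdf[seen] +
                  new_w[seen] * new_d[seen]) / w_sum[seen]
    # Cap the weight so the volume can still adapt to new observations.
    weights[:] = np.minimum(w_sum, max_weight)
    return tsdf, weights
```

Capping the weight keeps old observations from dominating forever, which matters once scenes are no longer perfectly static.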
Fig. 1. Real-time non-rigid reconstruction result overlaid on top of the RGB input
While these approaches achieve impressive results on static environments, they do not reconstruct dynamic scene elements such as non-rigidly moving objects. However, the reconstruction of deformable objects is central to a wide range of applications, and is also the focus of this work. In the past, a variety of methods for dense deformable geometry tracking from multi-view camera systems [9] or a single RGB-D camera, even in real time [10], were proposed. Unfortunately, all these methods require a complete static shape template of the tracked scene to start with; they then deform the template