MARS: parallelism-based metrically accurate 3D reconstruction system in real-time

SPECIAL ISSUE PAPER

MARS: parallelism-based metrically accurate 3D reconstruction system in real-time

Shu Zhang1 · Ting Wang2 · Gongfa Li3 · Junyu Dong1 · Hui Yu4

Received: 19 December 2019 / Accepted: 6 October 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract
Due to increasing application demands, lightweight device-based 3D recovery has drawn much attention from a wide group of researchers in both academia and industry. Current 3D reconstruction solutions typically rely on either depth data or RGB data. Depth data usually come from hardware deliberately designed for specific tasks, while RGB-based solutions employ only a single RGB camera together with vision-based computing algorithms. Both have limitations. Depth sensors are commonly either bulky or relatively expensive compared to RGB cameras, and are therefore less flexible. Ordinary RGB cameras usually offer better mobility but lower 3D-sensing accuracy than depth sensors. Recently, machine learning-based depth estimation has also been presented; however, its accuracy is still limited. To improve the flexibility of the 3D reconstruction system without loss of accuracy, this paper presents an unconstrained Metrically Accurate 3D Reconstruction System (MARS) for 3D sensing based on a consumer-grade camera. With a simple initialization from a depth map, the system can achieve incremental 3D reconstruction with a stable metric scale. Experiments are conducted using both real-world data and public datasets, and the proposed system obtains competitive results compared with several existing methods.

Keywords 3D reconstruction · Metrical · Outlier removal · Random down sampling
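The abstract's central idea is recovering a stable metric scale from a single depth-map initialization. The snippet below is a minimal sketch of one common way such a scale could be estimated, not the authors' actual algorithm: aligning up-to-scale monocular depths to the initial metric depth map via a robust median of per-pixel ratios. The function name `estimate_metric_scale` and the validity thresholds are illustrative assumptions.

```python
import numpy as np

def estimate_metric_scale(mono_depth, metric_depth, valid_min=0.1, valid_max=10.0):
    """Return a single scale factor mapping up-to-scale monocular depths onto
    metric depths from the initialization depth map (hypothetical helper)."""
    # Keep only pixels where both depths are plausible; thresholds are assumed.
    mask = (metric_depth > valid_min) & (metric_depth < valid_max) & (mono_depth > 0)
    ratios = metric_depth[mask] / mono_depth[mask]
    # The median of per-pixel ratios is robust to outliers (depth noise, moving objects).
    return float(np.median(ratios))

# Usage example: rescale an up-to-scale reconstruction into metric units.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_scale = 2.5
    metric = rng.uniform(0.5, 5.0, size=(120, 160))                  # simulated metric depth map
    mono = metric / true_scale + rng.normal(0, 0.01, metric.shape)   # up-to-scale depths + noise
    scale = estimate_metric_scale(mono, metric)                      # ~2.5
    print(scale)
```

Once such a scale factor is available, the up-to-scale point cloud and camera translations can simply be multiplied by it, which is why a single depth-map initialization can anchor an otherwise scale-ambiguous monocular reconstruction.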

This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) (EP/N025849/1); the National Natural Science Foundation of China (NSFC) (41906177, 41927805, 51575407); the China Postdoctoral Science Foundation (2019M652476); the Fundamental Research Funds for the Central Universities, China (201964022); the International Science and Technology Cooperation Program of China (ISTCP) (2014DFA10410); and the Shandong Provincial Natural Science Foundation, China (ZR2018ZB0852).

* Junyu Dong  [email protected]
* Hui Yu  [email protected]

1 Ocean University of China, Qingdao, China
2 Shandong University of Science and Technology, Qingdao, China
3 Wuhan University of Science and Technology, Wuhan, China
4 University of Portsmouth, Portsmouth, UK

1 Introduction

The perception of depth has received wide attention in many fields in recent years [1], such as 3D visual odometry [2], virtual reality and smart communities [3, 4]. Various kinds of depth sensors have been developed for different applications. For example, Microsoft's Xbox uses indoor depth data obtained by the Kinect for gamer-console interaction, and Google's robotic cars use detailed 3D maps of their surroundings, obtained from multiple sensors such as laser rangefinders, to achieve autonomous driving. Currently, existing approaches