Multi-sensor data fusion for accurate surface modeling
METHODOLOGIES AND APPLICATION
Mahesh K. Singh¹ · Ashish Dutta³ · K. S. Venkatesh²
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
Abstract Multi-sensor data fusion is advantageous when combining data from heterogeneous range sensors to scan a scene containing both fine and coarse details. This paper presents a new multi-sensor range data fusion method aimed at increasing the descriptive content of the generated surface model. First, a new training framework for the scanned range dataset is presented, in which a relaxed Gaussian mixture model is solved by applying a convex relaxation technique. The range data are then classified according to the trained statistical model. In the data fusion experiments, a laser range sensor and a Kinect (V1) are used. Based on the segmentation criterion, range data fusion is performed by integrating the laser range data of the finer regions with the Kinect range data of the coarser regions. The fused range information overcomes the weaknesses of the individual sensors: the laser scanner is accurate but slow, while the Kinect is fast but less accurate. The surface model built from the fused range dataset is a highly accurate, realistic representation of the scene. The experimental results demonstrate the robustness of the proposed approach.

Keywords Surface reconstruction · Data fusion · Gaussian mixture model · Convex relaxation · Laser range sensor · Kinect (V1)
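The fusion pipeline summarized in the abstract can be illustrated with a minimal sketch. The snippet below is an assumption-laden stand-in, not the authors' method: it uses a standard EM-fitted Gaussian mixture from scikit-learn in place of the paper's relaxed GMM solved by convex relaxation, and it assumes the Kinect and laser point sets are already registered and sampled at corresponding points, with a hypothetical per-point feature vector (e.g., local surface roughness) driving the fine/coarse segmentation.

```python
# A minimal, illustrative sketch (not the authors' implementation): a standard
# EM-fitted Gaussian mixture stands in for the paper's relaxed GMM solved via
# convex relaxation. Function and variable names are assumptions for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture


def fuse_range_data(kinect_pts, laser_pts, features):
    """Fuse two registered range scans of the same scene.

    kinect_pts, laser_pts : (N, 3) arrays of corresponding 3D points.
    features              : (N, d) per-point descriptors (e.g., local surface
                            roughness) used to separate fine from coarse regions.
    """
    # Fit a two-component mixture and label every point as fine or coarse.
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(features)

    # Assume the component with the larger mean feature magnitude is "fine".
    fine_label = int(np.argmax(np.linalg.norm(gmm.means_, axis=1)))
    fine_mask = labels == fine_label

    # Keep accurate laser samples where detail is fine, fast Kinect samples elsewhere.
    return np.vstack([laser_pts[fine_mask], kinect_pts[~fine_mask]])
```

In practice, the feature design and the fine/coarse decision rule are where the paper's relaxed, convexly solved GMM would replace the plain EM fit used in this sketch.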
Communicated by V. Loia.

Electronic supplementary material The online version of this article (https://doi.org/10.1007/s00500-020-04797-9) contains supplementary material, which is available to authorized users.

Mahesh K. Singh [email protected] · Ashish Dutta [email protected] · K. S. Venkatesh [email protected]

1 Department of Electronics and Communication Engineering, National Institute of Technology Delhi, Delhi 110040, India
2 Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India
3 Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India

1 Introduction

Multi-sensor range data fusion is the process of integrating 3D information from redundant and/or complementary sensing devices to generate a complete and accurate representation of a scene. The generation of a three-dimensional model of the environment (or objects) is required for robot motion planning, computer vision, virtual reality, industrial design, prototyping, cultural heritage documentation, action recognition, surveillance, etc. These applications require accurate perception of the three-dimensional structure to deal with complex scenes, especially the finer details of regions of interest. For example, an accurate 3D surface is crucial for the path planning and tracking of an autonomous mobile robot seeking the optimal path between two locations. The active range sensors, based on coherent (laser) and