Semantic segmentation of surface from lidar point cloud
Aritra Mukherjee1 · Sourya Dipta Das2 · Jasorsi Ghosh2 · Ananda S. Chowdhury2 · Sanjoy Kumar Saha1

Received: 29 April 2020 / Revised: 13 July 2020 / Accepted: 9 September 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract

Mapping the environment for robot navigation is an important and challenging task in SLAM (Simultaneous Localization And Mapping). A Lidar sensor can produce a near-accurate 3D map of the environment in real time in the form of point clouds. Though point cloud data is adequate for building a map of the environment, processing the millions of points in a point cloud is computationally expensive. In this paper, we propose a fast algorithm that extracts semantically labelled surface segments from the cloud in real time, for direct navigational use or for higher-level contextual scene reconstruction. First, a single scan from a spinning Lidar is used to generate a mesh of sampled cloud points. The mesh is then used to compute surface normals for a set of points, on the basis of which surface segments are estimated. A novel descriptor is proposed to represent the surface segments. This descriptor is used to determine the surface class (semantic label) of each segment with the help of a classifier. These semantic surface segments can be further utilized for geometric reconstruction of objects in the scene or for optimized trajectory planning of a robot. The proposed method is compared with a number of point cloud segmentation methods and state-of-the-art semantic segmentation methods to demonstrate its efficacy in terms of speed and accuracy.

Keywords: Semantic surface segmentation · 3D point cloud processing · Lidar data · Meshing
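The mesh-then-normals step outlined in the abstract can be illustrated with a minimal sketch (this is not the authors' implementation; the function name and sample points are purely illustrative): given three neighbouring sampled points that form a mesh triangle, a surface normal follows from the cross product of two edge vectors.

```python
import numpy as np

def estimate_normal(p, q, r):
    """Unit normal of the triangle (p, q, r), computed as the
    cross product of its two edge vectors."""
    n = np.cross(q - p, r - p)
    return n / np.linalg.norm(n)

# Three points on the z = 0 plane: the normal should point along z.
p = np.array([0.0, 0.0, 0.0])
q = np.array([1.0, 0.0, 0.0])
r = np.array([0.0, 1.0, 0.0])
print(estimate_normal(p, q, r))
```

Grouping nearby points whose normals agree (within some angular tolerance) is one common way such per-point normals feed into surface segment estimation.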
1 Introduction

3D mapping of the environment is an important problem for various robotic applications and is one of the two pillars of SLAM (Simultaneous Localization And Mapping) for mobile
Sanjoy Kumar Saha

1 Department of Computer Science and Engineering, Jadavpur University, Kolkata, India

2 Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, India

Multimedia Tools and Applications
robots. Various kinds of sensors are in use to achieve this goal. Stereo vision cameras are among the cheapest solutions and work satisfactorily in well-lit, textured environments, but fail in places lacking distinctive image features. Structured light and Time-of-Flight (ToF) cameras give real-time depth information for the pixels in an image of a scene (RGBD) and are well suited to indoor use. But in the presence of strong light, i.e. in outdoor environments, their efficiency suffers considerably. Lidar is the primary choice for mobile robots working in environments with diverse illumination and structural features. Lidar works on the principle of measuring the time of flight of short signature bursts of laser light that can be filtered out from other forms of radiation. As a result, its robustness and range are increased. The downside of Lidar is i
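The time-of-flight principle above reduces to halving the measured round-trip time of the laser pulse. A minimal sketch (the function name and the timing value are illustrative, not taken from the paper):

```python
# Time-of-flight ranging: distance = c * t / 2, since the pulse
# travels to the target and back (round trip).
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s):
    """Distance to the target from a measured round-trip time."""
    return C * round_trip_time_s / 2.0

# A pulse returning after ~667 ns corresponds to a target roughly 100 m away.
print(tof_distance(667e-9))
```

The nanosecond scale of these round-trip times is why Lidar units need high-precision timing electronics to resolve centimetre-level range differences.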