PointFusionNet: Point feature fusion network for 3D point clouds analysis
Pan Liang1 · Zhijun Fang1 · Bo Huang1 · Heng Zhou1 · Xianhua Tang1 · Cengsi Zhong1

Accepted: 5 October 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract

3D point clouds are an important type of geometric data structure, and their analysis with deep learning is a challenging task due to their disorder and irregularity. Among existing approaches, RS-CNN offers an effective and promising way to learn shape features directly from unordered point clouds, encoding local features effectively. However, RS-CNN does not consider point-wise features or global features, both of which benefit point cloud analysis. In this paper, we propose PointFusionNet, which addresses these problems by fusing point-wise, local, and global features. We design two modules to build PointFusionNet: Feature Fusion Convolution (FF-Conv) and the Global Relationship Reasoning Module (GRRM). FF-Conv fuses each point-wise feature with its corresponding local feature and maps the result into a high-dimensional space to extract richer local features. GRRM infers the relationships between different parts of a shape in order to capture global features that enrich the feature descriptor. With these two distinctive modules, PointFusionNet is well suited to point cloud classification and semantic segmentation. PointFusionNet has been evaluated on the ModelNet40 and ShapeNet part datasets, and the experiments show that it is competitive in shape classification and part segmentation tasks.

Keywords Point clouds · Feature fusion convolution · Global relationship reasoning module
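To make the fusion idea in the abstract concrete — concatenating point-wise, local-neighbourhood, and global descriptors for every point before further processing — the following NumPy sketch is our own illustrative mock-up, not the authors' implementation; the helper names `knn_indices` and `fuse_features` are hypothetical.

```python
import numpy as np

def knn_indices(points, k):
    # Pairwise squared distances (N, N), then the k nearest neighbours per point.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]

def fuse_features(points, feats, k=8):
    """Concatenate point-wise, local (kNN max), and global (set max) features."""
    idx = knn_indices(points, k)              # (N, k) neighbour indices
    local_feat = feats[idx].max(axis=1)       # (N, C) neighbourhood summary
    global_feat = feats.max(axis=0)           # (C,)  order-invariant global summary
    global_tiled = np.broadcast_to(global_feat, feats.shape)
    # Fused descriptor per point: [point-wise | local | global] -> (N, 3C).
    return np.concatenate([feats, local_feat, global_tiled], axis=1)

rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3)).astype(np.float32)
f = rng.normal(size=(64, 16)).astype(np.float32)
fused = fuse_features(pts, f)
print(fused.shape)  # (64, 48)
```

In the actual network the fused descriptor would be passed through a shared MLP; this sketch only shows how the three feature types are assembled per point.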
1 Introduction

The rapid development of 3D scanning devices and depth sensors [1] makes 3D data acquisition easier and more convenient. Applications of 3D data, such as autonomous driving [2–5], 3D scene understanding, robot mapping and navigation [6, 7], and 3D shape representation and modeling [8], are becoming increasingly popular. Currently, there are several types of 3D shape representation: depth maps, voxels, multi-view images, grids, and point clouds [9]. Point clouds have received widespread attention on account of their simplicity and utility. In recent years, Convolutional Neural Networks (CNNs) have achieved tremendous success in 2D computer vision tasks. However, CNNs cannot directly process irregular, unstructured data such as point clouds, so extracting meaningful information from point clouds for analysis remains an important problem.

Pan Liang
[email protected]

1 Shanghai University of Engineering Science, 333 Longteng Road, Songjiang District, Shanghai 201620, China
In existing works, some methods [10–12] take advantage of CNNs by converting unstructured point clouds into regular 3D grids for analysis. However, these methods are inefficient in storage and computation due to the sparsity of the point cloud structure. Qi et al. [13] proposed PointNet, which applies shared point-wise MLPs followed by a symmetric max-pooling function to consume unordered point sets directly.
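The key property of the point-based approach of Qi et al. [13] is permutation invariance: a shared per-point function followed by a symmetric aggregation (max pooling) yields the same output for any ordering of the input points. A minimal sketch of our own, not the paper's code:

```python
import numpy as np

def shared_mlp(points, W, b):
    # The same weights are applied to every point: (N, 3) @ (3, C) + (C,), then ReLU.
    return np.maximum(points @ W + b, 0.0)

def global_descriptor(points, W, b):
    # Max pooling over points is symmetric, so permuting the input rows
    # leaves the global feature vector unchanged.
    return shared_mlp(points, W, b).max(axis=0)

rng = np.random.default_rng(1)
pts = rng.normal(size=(128, 3))
W = rng.normal(size=(3, 32))
b = rng.normal(size=32)

g1 = global_descriptor(pts, W, b)
g2 = global_descriptor(pts[rng.permutation(128)], W, b)
print(np.allclose(g1, g2))  # True
```

This invariance is what lets such networks consume raw, unordered point clouds without voxelization, at the cost of summarizing the whole set into a single vector — the limitation that local-feature methods such as RS-CNN and the fusion modules in this paper aim to address.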