Semantic part segmentation of single-view point cloud
Haotian PENG, Bin ZHOU*, Liyuan YIN, Kan GUO & Qinping ZHAO

State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing 100191, China

Received 15 September 2018 / Accepted 30 November 2018 / Published online 28 September 2020

Citation: Peng H T, Zhou B, Yin L Y, et al. Semantic part segmentation of single-view point cloud. Sci China Inf Sci, 2020, 63(12): 224101, https://doi.org/10.1007/s11432-018-9689-9
As a classic topic in computer graphics, the semantic part segmentation of 3D data is helpful for 3D part-level editing and modeling. A single-view point cloud is the raw format of 3D data. Giving each point in a single-view point cloud a semantic annotation, i.e., single-view point cloud semantic part segmentation, is both meaningful and challenging. Over the past decades, many studies have focused on extracting effective geometric descriptors or training high-level features to perform this semantic segmentation task [1]. However, these features have primarily been extracted according to the 3D shape topology, and most traditional methods are therefore inapplicable to 3D point clouds. Few studies have focused on part-level point cloud semantic segmentation. A few researchers have addressed this task with deep learning [2–4]; however, such methods often take multi-view synthetic point clouds as input rather than single-view point clouds.

To address this problem, we propose transferring semantic annotations from synthetic models to the single-view point cloud. The pipeline of our method is shown in Figure 1. After establishing a database of 3D synthetic CAD models, we matched the single-view point cloud against all the models in the database to find guidance models. Then, we registered the single-view point cloud with the guidance models and built point-level correspondences between them. Finally, we transferred the annotations from the guidance models to the input point cloud.
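To make the match-register-transfer pipeline concrete, the following is a minimal Python sketch. It is an illustration under simplifying assumptions, not our implementation: a Chamfer distance stands in for the projection-based matching described below, a shared normalization stands in for the registration step, and all function names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def normalize(pts):
    """Center a point cloud and scale it into the unit sphere."""
    pts = pts - pts.mean(axis=0)
    return pts / np.linalg.norm(pts, axis=1).max()

def chamfer(a, b):
    """Symmetric Chamfer distance: a crude stand-in for the
    projection-based visual similarity used for matching."""
    da, _ = cKDTree(b).query(a)
    db, _ = cKDTree(a).query(b)
    return da.mean() + db.mean()

def transfer_segmentation(scan, database):
    """scan: (N, 3) single-view point cloud.
    database: list of dicts with 'points' (M, 3) and per-point
    part 'labels' (M,), i.e., the annotated CAD models."""
    scan_n = normalize(scan)
    # 1. Matching: pick the most similar model as guidance.
    guide = min(database,
                key=lambda m: chamfer(scan_n, normalize(m["points"])))
    # 2. Registration: here the shared normalization plays the role
    #    of alignment; the real pipeline registers the clouds and
    #    builds point-level correspondences.
    aligned = normalize(guide["points"])
    # 3. Transfer: copy the part label of each scanned point's
    #    nearest neighbor on the aligned guidance model.
    _, nn = cKDTree(aligned).query(scan_n, k=1)
    return guide["labels"][nn]
```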
We tested our method on both synthetic and real scanned single-view point cloud datasets. The results indicate that our method can effectively segment incomplete point clouds, with higher annotation accuracy and less computation time than existing state-of-the-art methods.

Category-independent matching. Taking the single-view point cloud T as input, we computed an orthogonal projection and extracted features from it, following the 3D-model-retrieval approach [5]. We also extracted features from the orthogonal projections of all the models in the matching database. This approach measures the similarity between 3D models via the visual similarity of their projections. A total of 20 orthogonal projections of an object are encoded with both Zernike moments and Fourier descriptors to form the features used for later matching. These 20 projection views are distributed uniformly and can roughly represent the shape of a 3D model.
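As a rough illustration of this matching stage, the sketch below renders 20 roughly uniform orthographic silhouettes of a normalized point cloud and encodes each view with low-frequency 2D-FFT magnitudes. This is a simplified stand-in: the actual encoding in [5] uses Zernike moments and contour Fourier descriptors, which are omitted here for brevity, and all function names are illustrative.

```python
import numpy as np

def silhouette(pts, view_dir, res=64):
    """Binary orthographic silhouette of a normalized (unit-sphere)
    point cloud along the unit vector view_dir."""
    v = view_dir / np.linalg.norm(view_dir)
    # Build an orthonormal image-plane basis (u, w) around v.
    a = np.array([1.0, 0.0, 0.0]) if abs(v[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(v, a); u /= np.linalg.norm(u)
    w = np.cross(v, u)
    # Project points onto the image plane and rasterize.
    xy = np.stack([pts @ u, pts @ w], axis=1)            # in [-1, 1]
    ij = np.clip(((xy + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    img = np.zeros((res, res))
    img[ij[:, 0], ij[:, 1]] = 1.0
    return img

def view_descriptor(img, keep=32):
    """Translation-insensitive view descriptor: low-frequency
    magnitudes of the 2D FFT (simplified stand-in for the
    Zernike-moment + Fourier-descriptor encoding of [5])."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    c, h = img.shape[0] // 2, keep // 2
    return f[c - h:c + h, c - h:c + h].ravel()

def view_directions(n=20):
    """n roughly uniform directions on the sphere (Fibonacci lattice)."""
    i = np.arange(n)
    z = 1 - 2 * (i + 0.5) / n
    phi = i * np.pi * (3 - np.sqrt(5))
    r = np.sqrt(1 - z**2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def shape_signature(pts):
    """Concatenate the 20 per-view descriptors into one feature vector."""
    return np.concatenate([view_descriptor(silhouette(pts, d))
                           for d in view_directions(20)])
```

Candidate guidance models can then be ranked by a simple distance between signatures, e.g., np.linalg.norm(shape_signature(scan) - shape_signature(model)), with both clouds normalized beforehand.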