Two-Stage Point Cloud Super Resolution with Local Interpolation and Readjustment via Outer-Product Neural Network∗

WANG Guangyu · XU Gang · WU Qing · WU Xundong (Corresponding author)

School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China; Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou 310018, China. Email: [email protected]; [email protected]; [email protected]; [email protected].

∗ This research was supported by the NSFC-Zhejiang Joint Fund for the Integration of Industrialization and Informatization under Grant No. U1909210, and the National Natural Science Foundation of China under Grant Nos. 61761136010, 61772163. This paper was recommended for publication by Editor-in-Chief GAO Xiao-Shan.

DOI: 10.1007/s11424-020-9266-x
Received: 17 September 2019 / Revised: 20 November 2019
© The Editorial Office of JSSC & Springer-Verlag GmbH Germany 2020

Abstract  This paper proposes a two-stage point cloud super resolution framework that combines local interpolation with deep-neural-network based readjustment. In the first stage, the authors apply a local interpolation method to increase the density and uniformity of the target point cloud. In the second stage, the authors employ an outer-product neural network to readjust the positions of the points inserted in the first stage. Comparison examples demonstrate that the proposed framework achieves better accuracy than existing state-of-the-art approaches, such as PU-Net, PointNet and DGCNN (source code is available at https://github.com/qwerty1319/PC-SR).

Keywords  Neural network, outer-product network, point cloud, super resolution.
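Before the details given in the paper, the following is a minimal Python sketch of the two-stage idea summarized in the abstract; it is not the authors' implementation. Stage one densifies a cloud by inserting midpoints between each point and its nearest neighbors, and stage two stands in for the learned outer-product readjustment with a hypothetical offset-predicting model. The names local_interpolate and readjust are illustrative assumptions.

import numpy as np

# Stage 1 (sketch): densify the cloud by inserting midpoints between each
# point and its k nearest neighbors.  This only illustrates the idea of
# "local interpolation"; it is not the interpolation scheme of the paper.
def local_interpolate(points, k=4):
    diff = points[:, None, :] - points[None, :, :]      # (N, N, 3) pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)                # (N, N) pairwise distances
    np.fill_diagonal(dist, np.inf)                      # a point is not its own neighbor
    idx = np.argsort(dist, axis=1)[:, :k]               # indices of the k nearest neighbors
    mids = 0.5 * (points[:, None, :] + points[idx])     # (N, k, 3) midpoints
    return mids.reshape(-1, 3)

# Stage 2 (placeholder): the paper readjusts the inserted points with a trained
# outer-product network; here a hypothetical model predicting per-point offsets
# stands in for it.
def readjust(new_points, model=None):
    if model is None:                                   # no trained network in this sketch
        return new_points
    return new_points + model(new_points)

sparse = np.random.rand(128, 3)                         # toy sparse input cloud
dense = np.concatenate([sparse, readjust(local_interpolate(sparse))], axis=0)
print(sparse.shape, "->", dense.shape)                  # (128, 3) -> (640, 3)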

1  Introduction

Point cloud data processing is an essential problem in geometry processing. With the widespread use of depth sensors, point cloud data are becoming readily accessible. Recently, many machine-learning based works have begun to focus on the processing of point cloud data[1–7], attempting to use neural networks to understand geometric features and structures. In these approaches, point cloud data are fed directly into networks in the form of 3D coordinates, and the networks can perform high-level tasks such as object classification or semantic scene segmentation.

In the typical case of 2D image processing, data are generally represented as a regular matrix or tensor, and adjacent pixels can be obtained directly through simple indexing. In contrast with regular image data, a point cloud is a collection of discrete points. Each point in the collection can be regarded as an independent entity, and there are no explicit connections between points. In addition, raw scanned point clouds often suffer from holes, noise, and incomplete scanning, so it is challenging to train neural networks on point cloud inputs.

Convolutional neural networks have achieved outstanding performance in image and video processing. Naturally, it is desirable to extend
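As a concrete illustration of the neighborhood-access gap described above (a toy example, not code from the paper): a neighboring pixel of an image is reached by direct indexing, whereas a neighbor in an unordered point cloud must be found by an explicit search over all points.

import numpy as np

image = np.random.rand(64, 64)           # regular grid: adjacency is implicit in the layout
left_of_pixel = image[10, 9]             # the pixel adjacent to (10, 10), one index away

cloud = np.random.rand(1024, 3)          # unordered point set: no implicit adjacency
query = cloud[0]
dists = np.linalg.norm(cloud - query, axis=1)
nearest = cloud[np.argsort(dists)[1]]    # nearest neighbor found only by searching all points
print(left_of_pixel, nearest)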