Single Image 3D Interpreter Network
1 Massachusetts Institute of Technology, Cambridge, USA
[email protected]
2 Stanford University, Stanford, USA
3 Facebook AI Research, Menlo Park, USA
4 Google Research, Cambridge, USA
Abstract. Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information. In this work, we propose 3D INterpreter Network (3D-INN), an end-to-end framework which sequentially estimates 2D keypoint heatmaps and 3D object structure, trained on both real 2D-annotated images and synthetic 3D data. This is made possible mainly by two technical innovations. First, we propose a Projection Layer, which projects estimated 3D structure to 2D space, so that 3D-INN can be trained to predict 3D structural parameters supervised by 2D annotations on real images. Second, heatmaps of keypoints serve as an intermediate representation connecting real and synthetic data, enabling 3D-INN to benefit from the variation and abundance of synthetic 3D objects, without suffering from the difference between the statistics of real and synthesized images due to imperfect rendering. The network achieves state-of-the-art performance on both 2D keypoint estimation and 3D structure recovery. We also show that the recovered 3D information can be used in other vision applications, such as image retrieval.

Keywords: 3D structure · Single image 3D reconstruction · Keypoint estimation · Neural network · Synthetic data
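The Projection Layer described in the abstract is what allows 3D parameters to be supervised with only 2D annotations: the estimated 3D keypoints are projected back to the image plane, where a 2D loss can be applied. The following is a minimal sketch of such a projection, assuming a weak-perspective camera model with rotation R, image-plane translation t, and scale s; the function name and exact parameterization are illustrative, not taken from the paper.

```python
import numpy as np

def project_keypoints(X, R, t, s):
    """Weak-perspective projection of N 3D keypoints to 2D.

    X: (N, 3) 3D keypoint coordinates in object space
    R: (3, 3) rotation matrix (object-to-camera)
    t: (2,) translation in the image plane
    s: scalar scale
    Returns an (N, 2) array of projected 2D keypoints.
    """
    # Rotate into camera coordinates, drop the depth axis
    # (orthographic projection), then scale and translate.
    return s * (X @ R.T)[:, :2] + t

# With an identity pose, the x and y coordinates pass through unchanged.
X = np.array([[0.0, 0.0, 1.0],
              [1.0, 2.0, 3.0]])
pts = project_keypoints(X, np.eye(3), np.zeros(2), 1.0)
```

Because every operation here is differentiable in R, t, s, and X, a 2D keypoint loss on the output can be backpropagated to the 3D structural parameters, which is the property the Projection Layer relies on.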
1 Introduction
J. Wu and T. Xue contributed equally to this work.
Electronic supplementary material: The online version of this chapter (doi:10.1007/978-3-319-46466-4_22) contains supplementary material, which is available to authorized users.
© Springer International Publishing AG 2016. B. Leibe et al. (Eds.): ECCV 2016, Part VI, LNCS 9910, pp. 365–382, 2016. DOI: 10.1007/978-3-319-46466-4_22
J. Wu et al.

Fig. 1. An abstraction of the proposed 3D INterpreter Network (3D-INN).

Deep networks have achieved impressive performance on 1,000-way image classification [19]. However, for any visual system to parse objects in the real world,
it needs not only to assign category labels to objects, but also to interpret their intra-class variation. For example, for a chair, we are interested in its intrinsic properties such as its style, height, leg length, and seat width, and extrinsic properties such as its pose. In this paper, we recover these object properties from a single image by estimating 3D structure. Instead of a 3D mesh or a depth map [2, 9, 16, 18, 32, 40, 50], we represent an object via a 3D skeleton [47], which consists of keypoints and the connections between them (Fig. 1c). Being a simple abstraction, the skeleton representation preserves the structural properties that we are interested in. In this paper, we assume one pre-defined skeleton model for each object category (e.g. chair, sofa,