Learning Local Shape Descriptors from Part Correspondences With Multi-view Convolutional Networks
ACM Transactions on Graphics (TOG), 2017 (also presented at SIGGRAPH 2018).
University of Massachusetts Amherst1, Indian Institute of Technology Bombay2, Adobe Research3
We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching. Our key insight is that the neighborhood of a point on a shape is effectively captured at multiple scales by a succession of progressively zoomed-out views, taken from carefully selected camera positions. We propose a convolutional neural network that uses local views around a point to embed it into a multidimensional descriptor space, such that geometrically and semantically similar points are close to one another. To train our network, we leverage two extremely large sources of data. First, since our network processes 2D images, we repurpose architectures pre-trained on massive image datasets. Second, we automatically generate a synthetic dense correspondence dataset by part-aware, non-rigid alignment of a massive collection of 3D models. As a result of these design choices, our view-based architecture effectively encodes multi-scale local context and fine-grained surface detail. We demonstrate through several experiments that our learned local descriptors are more general and robust than state-of-the-art alternatives, and support a variety of applications without any additional fine-tuning.
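To make the point-correspondence application concrete, here is a minimal, hypothetical sketch (not the authors' code): assuming each surface point has already been mapped to a descriptor vector by the network, matching points between two shapes reduces to nearest-neighbor search in descriptor space.

```python
import numpy as np

# Hypothetical illustration: once per-point descriptors have been
# extracted, dense correspondences between two shapes can be found by
# nearest-neighbor search in descriptor space.

def match_points(desc_a, desc_b):
    """For each row of desc_a, return the index of the nearest row of
    desc_b under Euclidean distance."""
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (np.sum(desc_a ** 2, axis=1)[:, None]
          + np.sum(desc_b ** 2, axis=1)[None, :]
          - 2.0 * desc_a @ desc_b.T)
    return np.argmin(d2, axis=1)

# Toy data: 3 points per shape with 2-D descriptors; shape A's points are
# slightly perturbed copies of shape B's points, in shuffled order.
desc_b = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
desc_a = desc_b[[2, 0, 1]] + 0.05
print(match_points(desc_a, desc_b))  # -> [2 0 1]
```

Because similar points embed close together, the recovered assignment undoes the shuffle despite the perturbation; in practice the descriptors would be higher-dimensional and the search accelerated with a KD-tree or approximate nearest-neighbor index.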
Code
Code for the local MVCNN descriptor, including: 1) rendering, 2) viewpoint generation, and 3) training and feature extraction with Caffe (Python layer required).
The following archives contain the 3D models and the dense point correspondences we generated to train the LMVCNN.