**Cross-Modal 3D Shape Generation and Manipulation**
[Zezhou Cheng^1](http://people.cs.umass.edu/~zezhoucheng), [Menglei Chai^2](https://mlchai.com/), [Jian Ren^2](https://alanspike.github.io/), [Hsin-Ying Lee^2](http://hsinyinglee.com/), [Kyle Olszewski^2](https://kyleolsz.github.io/), [Zeng Huang^2](https://zeng.science/), [Subhransu Maji^1](http://people.cs.umass.edu/~smaji/) and [Sergey Tulyakov^2](http://www.stulyakov.com/)
^1 _University of Massachusetts Amherst_
^2 _Snap Inc._
**TL;DR** We generate and edit 3D shapes from 2D sketches and RGB images via a multi-modal generative model whose latent space is shared across modalities.
**Abstract** Creating and editing the shape and color of 3D objects require tremendous human effort and expertise. Compared to direct manipulation in 3D interfaces, 2D interactions such as sketches and scribbles are usually much more natural and intuitive for users. In this paper, we propose a generic multi-modal generative model that couples 2D modalities and implicit 3D representations through shared latent spaces. With the proposed model, versatile 3D generation and manipulation are enabled by simply propagating edits from a specific 2D controlling modality through the latent spaces. For example, one can edit a 3D shape by drawing a sketch, re-colorize its surface by painting color scribbles on the 2D rendering, or generate 3D shapes of a certain category given one or a few reference images. Unlike prior works, our model does not require re-training or fine-tuning per editing task; it is also conceptually simple, easy to implement, robust to input domain shifts, and flexible enough to produce diverse reconstructions from partial 2D inputs. We evaluate our framework on two representative 2D modalities, grayscale line sketches and rendered color images, and demonstrate that our method enables various shape manipulation and generation tasks with these modalities.
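To make the shared-latent-space idea concrete, below is a minimal PyTorch sketch of how per-modality 2D encoders can feed a single implicit 3D decoder. All module names, architectures, and dimensions (`Encoder`, `ImplicitDecoder`, `LATENT_DIM`) are illustrative assumptions for exposition, not the paper's actual implementation; see the code release below for the real model.

```python
# Illustrative sketch only: per-modality encoders map 2D inputs into one
# shared latent space, and an implicit decoder predicts 3D occupancy from
# that code. Architectures and sizes here are assumptions, not the paper's.
import torch
import torch.nn as nn

LATENT_DIM = 128  # assumed size of the shared latent code

class Encoder(nn.Module):
    """Maps a 2D modality (sketch or color rendering) to the shared latent code."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class ImplicitDecoder(nn.Module):
    """Predicts occupancy at 3D query points, conditioned on the latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, z, pts):
        # z: (B, LATENT_DIM); pts: (B, N, 3) -> occupancy logits (B, N)
        z = z.unsqueeze(1).expand(-1, pts.shape[1], -1)
        return self.net(torch.cat([z, pts], dim=-1)).squeeze(-1)

sketch_enc = Encoder(in_channels=1)  # grayscale line sketches
image_enc = Encoder(in_channels=3)   # rendered color images
decoder = ImplicitDecoder()

# Editing by propagation: encode the edited sketch into the shared latent
# space, then decode 3D occupancy at query points -- no per-task retraining.
edited_sketch = torch.randn(1, 1, 128, 128)
z = sketch_enc(edited_sketch)
query_pts = torch.rand(1, 4096, 3) * 2 - 1  # points in [-1, 1]^3
occupancy_logits = decoder(z, query_pts)
```

Because every modality encodes into the same latent space, an edit in any 2D view (e.g., a redrawn sketch stroke) updates the latent code, and the implicit decoder immediately reflects that edit in 3D.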
Publication
==========================================================================================
**Cross-Modal 3D Shape Generation and Manipulation**
Zezhou Cheng, Menglei Chai, Jian Ren, Hsin-Ying Lee, Kyle Olszewski, Zeng Huang, Subhransu Maji, Sergey Tulyakov
European Conference on Computer Vision (ECCV) 2022
[Paper](./edit3d.pdf)
Code
==========================================================================================
[GitHub Link](https://github.com/snap-research/edit3d)