ASSET: Autoregressive Semantic Scene Editing with Transformers
at High Resolutions

ACM Transactions on Graphics (SIGGRAPH 2022)
Difan Liu    Sandesh Shetty    Tobias Hinz    Matthew Fisher    Richard Zhang    Taesung Park    Evangelos Kalogerakis
[Paper]
[GitHub]

Abstract

We present ASSET, a neural architecture for automatically modifying an input high-resolution image according to a user's edits on its semantic segmentation map. Our architecture is based on a transformer with a novel attention mechanism. Our key idea is to sparsify the transformer's attention matrix at high resolutions, guided by dense attention extracted at lower image resolutions. Whereas previous attention mechanisms are either computationally too expensive for high-resolution images or overly constrained within specific image regions, hampering long-range interactions, our novel attention mechanism is both computationally efficient and effective. Our sparsified attention mechanism captures long-range interactions and context, enabling the synthesis of interesting phenomena in scenes, such as reflections of landscapes onto water or flora consistent with the rest of the landscape, which previous convnet and transformer approaches could not generate reliably. We present qualitative and quantitative results, along with user studies, demonstrating the effectiveness of our method.
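To make the guided-sparsification idea concrete, below is a minimal PyTorch sketch. It is not the paper's implementation: ASSET operates on 2-D token grids with its own selection scheme, whereas this sketch uses a 1-D analogue in which each low-resolution token covers a contiguous block of high-resolution tokens. The function name sparsified_guided_attention, the parameters block and top_k, and the top-k block-selection heuristic are all illustrative assumptions.

import torch

def sparsified_guided_attention(q, k, v, low_attn, block=4, top_k=8):
    # ILLUSTRATIVE SKETCH, not the authors' code.
    # q, k, v  : (B, N, D) high-resolution queries/keys/values, N = n * block;
    #            each low-res token covers `block` consecutive high-res tokens
    #            (1-D stand-in for the paper's 2-D token grid).
    # low_attn : (B, n, n) dense attention computed at the low resolution.
    # top_k    : number of low-res key blocks each query block may attend to.
    B, N, D = q.shape
    n = low_attn.shape[1]
    assert N == n * block

    # 1. Use the dense low-res attention to pick, for every query block,
    #    its top-k most relevant key blocks (the sparsification guide).
    idx = low_attn.topk(top_k, dim=-1).indices                # (B, n, top_k)

    # 2. Gather the high-res keys/values inside the selected blocks.
    k_blk = k.view(B, n, block, D)
    v_blk = v.view(B, n, block, D)
    gidx = idx[..., None, None].expand(-1, -1, -1, block, D)  # (B, n, top_k, block, D)
    k_sel = k_blk.unsqueeze(1).expand(-1, n, -1, -1, -1).gather(2, gidx)
    v_sel = v_blk.unsqueeze(1).expand(-1, n, -1, -1, -1).gather(2, gidx)
    k_sel = k_sel.reshape(B, n, top_k * block, D)
    v_sel = v_sel.reshape(B, n, top_k * block, D)

    # 3. Standard scaled dot-product attention, but over only
    #    top_k * block keys per query instead of all N.
    q_blk = q.view(B, n, block, D)
    scores = torch.einsum('bnqd,bnkd->bnqk', q_blk, k_sel) / D ** 0.5
    out = torch.einsum('bnqk,bnkd->bnqd', scores.softmax(-1), v_sel)
    return out.reshape(B, N, D)

# Example usage with toy shapes:
B, n, block, D = 2, 16, 4, 64
q = torch.randn(B, n * block, D)
k = torch.randn(B, n * block, D)
v = torch.randn(B, n * block, D)
low_attn = torch.rand(B, n, n).softmax(-1)
y = sparsified_guided_attention(q, k, v, low_attn, block=block, top_k=8)
# y.shape == (2, 64, 64)

The point of the sketch is the cost profile: each query attends to O(top_k * block) keys rather than all N, so the attention cost drops from O(N^2) to O(N * top_k * block), while the guide still lets a query reach distant blocks (e.g., a water region attending to the sky it reflects).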


Results on Flickr-Landscape



Results on COCO-Stuff and ADE20K



Paper

D. Liu, S. Shetty, T. Hinz, M. Fisher,
R. Zhang, T. Park, E. Kalogerakis.
ASSET: Autoregressive Semantic Scene Editing with Transformers
at High Resolutions.

ACM Transactions on Graphics (SIGGRAPH 2022).
(hosted on arXiv)


[BibTeX]


Acknowledgements

This work was funded by Adobe Research.