Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation

Huaizu Jiang1     Deqing Sun2     Varun Jampani2     Ming-Hsuan Yang3,2     Erik Learned-Miller1     Jan Kautz2

1UMass Amherst     2NVIDIA     3UC Merced

Abstract

Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. We use 1,132 240-fps video clips, containing 300K individual video frames, to train our network. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.
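Around the two learned U-Nets, the per-frame computation reduces to two closed-form steps: linearly combining the bi-directional flows for each time t, and fusing the two warped inputs weighted by the visibility maps. Below is a minimal PyTorch sketch of these two steps; the U-Nets and the backward-warping function are omitted, and the tensor names and shapes are illustrative assumptions, not our released code.

    import torch

    def approx_intermediate_flows(f01, f10, t):
        # Approximate the flows from intermediate time t in (0, 1) back to
        # frame 0 and frame 1 by linearly combining the bi-directional flows
        # F_0->1 (f01) and F_1->0 (f10), each of shape [B, 2, H, W].
        f_t0 = -(1.0 - t) * t * f01 + t * t * f10
        f_t1 = (1.0 - t) ** 2 * f01 - t * (1.0 - t) * f10
        return f_t0, f_t1

    def fuse(i0_warped, i1_warped, v_t0, t):
        # Visibility-weighted fusion of the two backward-warped input frames.
        # v_t0 is the soft visibility map in [0, 1] and v_t1 = 1 - v_t0, so
        # pixels occluded in one input contribute little to the output.
        v_t1 = 1.0 - v_t0
        z = (1.0 - t) * v_t0 + t * v_t1  # normalization term
        return ((1.0 - t) * v_t0 * i0_warped + t * v_t1 * i1_warped) / (z + 1e-8)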


Paper

Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan Yang, Erik Learned-Miller, Jan Kautz. Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation. CVPR, 2018 (spotlight). [PDF][CVPR spotlight video]

Results

Interpolation results and evaluation script on UCF101.

FAQ

Q: Is the source code available?

A: Unfortunately, we are unable to publish the code. Implementing Super SloMo in PyTorch is fairly straightforward; a sketch of the core backward-warping step is given below. Feel free to contact Huaizu Jiang in case of any questions.

Super SloMo will be released as part of NVIDIA NGX.
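As a starting point, the backward warping g(I, F), which samples an input frame along a flow field, can be written in a few lines with torch.nn.functional.grid_sample. The version below is a hypothetical minimal sketch (it assumes flow in pixel units and bilinear sampling), not our released implementation.

    import torch
    import torch.nn.functional as F

    def backward_warp(img, flow):
        # Warp img ([B, C, H, W]) with a backward flow field ([B, 2, H, W],
        # in pixels) using differentiable bilinear sampling.
        _, _, h, w = img.shape
        gy, gx = torch.meshgrid(
            torch.arange(h, dtype=img.dtype, device=img.device),
            torch.arange(w, dtype=img.dtype, device=img.device),
            indexing="ij",
        )
        x = gx.unsqueeze(0) + flow[:, 0]  # target x-coordinates
        y = gy.unsqueeze(0) + flow[:, 1]  # target y-coordinates
        # grid_sample expects coordinates normalized to [-1, 1].
        x = 2.0 * x / (w - 1) - 1.0
        y = 2.0 * y / (h - 1) - 1.0
        grid = torch.stack((x, y), dim=-1)  # [B, H, W, 2]
        return F.grid_sample(img, grid, mode="bilinear", align_corners=True)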

Q: Is the YouTube-240fps dataset available?

A: Unfortunately, we are unable to share the dataset. You might find the Adobe 240-fps dataset helpful (you might need to contact the authors to obtain the high-frame-rate videos).

Q: How fast is SuperSloMo?

A: With our unoptimized PyTorch code, it takes 0.97s and 0.79s to generate 7 intermediate frames from two input images at 1280×720 resolution on a single NVIDIA GTX 1080 Ti and a Tesla V100 GPU, respectively.

Acknowledgments

We would like to thank Oliver Wang for generously sharing the Adobe 240-fps data. Yang acknowledges support from NSF (Grant No. 1149783).
Contact: hzjiang AT cs.umass.edu, deqings AT nvidia.com