

In-Person Poster presentation / poster accept

Towards Smooth Video Composition

Qihang Zhang · Ceyuan Yang · Yujun Shen · Yinghao Xu · Bolei Zhou

MH1-2-3-4 #66

Keywords: [ Generative models ] [ video generation ] [ generative adversarial network ]


Abstract:

Video generation, with the purpose of producing a sequence of frames, requires synthesizing consistent and persistent dynamic content over time. This work investigates how to model temporal relations for composing a video with an arbitrary number of frames, from a few to infinitely many, using generative adversarial networks (GANs). First, for composing adjacent frames, we show that the alias-free operations used in single-image generation, together with adequately pre-learned knowledge, bring smooth frame transitions without harming per-frame quality. Second, by incorporating a temporal shift module (TSM), originally designed for video understanding, into the discriminator, we help the generator synthesize more plausible dynamics. Third, we develop a novel B-spline-based motion representation that ensures temporal smoothness and hence enables infinite-length video generation, going beyond the number of frames used in training. We evaluate our approach on a range of datasets and show substantial improvements over baselines on video generation. Code and models are publicly available at https://genforce.github.io/StyleSV.
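The abstract does not spell out how the TSM is wired into the discriminator; below is a minimal sketch of the channel-shifting operation itself, following the formulation from the original TSM work for video understanding. The tensor layout (N, T, C, H, W) and the 1/8 shift fraction are assumptions, not details taken from this paper.

```python
import torch

def temporal_shift(x: torch.Tensor, shift_div: int = 8) -> torch.Tensor:
    """Shift a fraction of channels along the time axis (zero-padded at the ends).

    x: (N, T, C, H, W) video features. shift_div controls the fraction of
    channels (1/shift_div) shifted in each temporal direction; 8 is the
    default from the original TSM paper, assumed here.
    """
    n, t, c, h, w = x.shape
    fold = c // shift_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # shift first slice one step back in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift second slice one step forward
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # leave remaining channels unchanged
    return out
```

Mixing shifted and unshifted channels lets a per-frame (2D) discriminator see neighboring frames at negligible extra cost, which is what allows it to penalize implausible dynamics.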
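Likewise, the exact B-spline motion formulation is not given in the abstract. The sketch below shows one plausible reading under stated assumptions: a uniform cubic B-spline interpolates a stream of latent control points into a C^2-continuous motion trajectory, and appending control points extends the trajectory indefinitely. The function names, latent dimension, and sampling density are all hypothetical.

```python
import numpy as np

def cubic_bspline(ctrl: np.ndarray, t: float) -> np.ndarray:
    """Evaluate a uniform cubic B-spline over latent control points.

    ctrl: (K, D) array of K >= 4 control points of dimension D;
    t ranges over [0, K - 3). The result varies smoothly in t, which is
    what gives the motion trajectory its temporal smoothness.
    """
    i = min(int(np.floor(t)), len(ctrl) - 4)  # active window of 4 control points
    u = t - i
    # Uniform cubic B-spline basis functions (they sum to 1).
    b = np.array([
        (1 - u) ** 3,
        3 * u ** 3 - 6 * u ** 2 + 4,
        -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
        u ** 3,
    ]) / 6.0
    return b @ ctrl[i:i + 4]

# Usage sketch: sample control points in latent space, then read out one
# motion code per frame; streaming in new control points yields videos
# longer than the training clip length without breaking smoothness.
ctrl = np.random.randn(16, 512)  # 16 control points of 512-dim latents (assumed sizes)
motion_codes = np.stack([cubic_bspline(ctrl, f / 4.0) for f in range(4 * 13)])
```

Because each frame's code depends only on four nearby control points, generation cost per frame is constant regardless of total video length.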
