MoFlowGAN: Video Generation with Flow Guidance

Wei Li, Zehuan Yuan, Xiangzhong Fang, Changhu Wang

Length: 07:25
09 Jul 2020

In recent years, video generation has attracted considerable attention in the computer vision community. Unlike image generation, which focuses only on appearance, video generation requires modeling both content information and motion dynamics. In this work, we propose MoFlowGAN, which explicitly models motion dynamics through a content-motion decomposition architecture with an additional flow generator. The decomposition architecture models content and motion separately and is instantiated as a compact variant of BigGAN [1]. In addition, the flow generator predicts optical flow directly from high-level feature maps of adjacent frames, providing strong supervision that greatly reduces the search space of motion patterns. Our proposed MoFlowGAN achieves state-of-the-art results on both the MUG facial expression and UCF-101 datasets.
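To make the content-motion decomposition concrete, here is a minimal NumPy sketch of the general idea (not the authors' implementation; the function name and dimensions `content_dim`/`motion_dim` are illustrative assumptions): a single content code is shared across all frames of a video to capture appearance, while a separate motion code is sampled per frame to capture dynamics, and the two are concatenated into per-frame latent vectors that a frame decoder would consume.

```python
import numpy as np

def decompose_latents(rng, batch, frames, content_dim, motion_dim):
    """Sample one content code per video and one motion code per frame,
    then combine them into per-frame latent vectors.

    Illustrative sketch only; dimensions are hypothetical.
    """
    # Content code: fixed across time, models appearance.
    z_content = rng.standard_normal((batch, 1, content_dim))
    # Motion codes: vary per frame, model dynamics.
    z_motion = rng.standard_normal((batch, frames, motion_dim))
    # Tile the content code over the time axis and concatenate.
    z_content = np.repeat(z_content, frames, axis=1)
    return np.concatenate([z_content, z_motion], axis=-1)

z = decompose_latents(np.random.default_rng(0), batch=2, frames=16,
                      content_dim=120, motion_dim=8)
print(z.shape)  # (2, 16, 128)
```

Because the content slice of every frame's latent is identical, any variation between generated frames must come from the low-dimensional motion codes, which is one way such a decomposition constrains the space of motion patterns the generator can express.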
