  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:08:01
20 Sep 2021

Reference-based video colorization hallucinates a plausible color version of a gray-scale video by transferring distributions of possible colors from an input color frame that has semantic correspondences with the gray-scale frames. Color plausibility and temporal consistency are the two main challenges in this task. To tackle them, in this paper we propose a novel Generative Adversarial Network (GAN) with a siamese training framework. Specifically, the siamese training framework enables temporal feature augmentation, which enhances temporal consistency. To further improve the plausibility of the colorization results, we propose a multi-scale fusion module that accurately correlates reference-frame features with source-frame features. Experiments on various datasets show that our method achieves colorization with higher semantic accuracy than existing state-of-the-art approaches while maintaining temporal consistency among neighboring frames.
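The core of reference-based colorization is matching each gray-scale source location to semantically similar reference locations and blending the reference colors accordingly. The following is a minimal NumPy sketch of that correspondence-and-warp idea at a single scale; it is an illustrative assumption, not the paper's actual multi-scale fusion module, and the function names and flattened `(locations, channels)` layout are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def correlate_and_warp(src_feat, ref_feat, ref_color, temperature=0.01):
    """Warp reference colors to the source frame via feature correlation.

    Hypothetical sketch (single scale, flattened spatial locations):
      src_feat:  (N, C) features at N source-frame locations
      ref_feat:  (M, C) features at M reference-frame locations
      ref_color: (M, 2) ab color channels at the reference locations
    Returns (N, 2) predicted ab colors for the source frame.
    """
    # normalize features so the dot product is cosine similarity
    s = src_feat / (np.linalg.norm(src_feat, axis=1, keepdims=True) + 1e-8)
    r = ref_feat / (np.linalg.norm(ref_feat, axis=1, keepdims=True) + 1e-8)
    corr = s @ r.T                      # (N, M) semantic similarity matrix
    attn = softmax(corr / temperature)  # soft correspondence weights per source location
    return attn @ ref_color             # weighted blend of reference colors
```

A learned multi-scale variant would compute such correlations at several feature resolutions and fuse the warped results, which is the role the paper assigns to its fusion module.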
