  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 08:33
26 Oct 2020

Generation-based image inpainting methods can capture semantic features but fail to produce consistent details and high-quality images, owing to highly abstract feature learning and the instability of GAN training. Current methods try to overcome these disadvantages, but they either require additional edge maps or are unsuitable for occlusions of different shapes. In this paper, we introduce an adaptive hierarchical feature fusion network (AHFF-Net). Without additional maps, our method obtains consistent edges and high-quality results under different occlusions. Specifically, to guarantee the consistency of low-level features, our hierarchical fusion generator captures and aggregates multi-scale, multi-level context features. To obtain high-quality results, the conditional self-supervised discriminator pays more attention to the unknown area through a conditional GAN loss and stabilizes the training process through a conditional rotation loss. The proposed network consistently achieves state-of-the-art results on the Paris StreetView and Places365-Standard datasets with three shapes of masks.
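The abstract describes three ingredients: hierarchical fusion of multi-scale features, a conditional GAN loss that emphasizes the unknown (masked) region, and a self-supervised rotation loss. A minimal NumPy sketch of these ideas is below; it is an illustration under assumed simplifications (nearest-neighbor upsampling instead of learned fusion layers, a hinge GAN loss, and a simple `1 + mask` weighting), not the paper's actual implementation.

```python
import numpy as np

def fuse_hierarchical(features):
    """Upsample multi-scale feature maps to the finest resolution and average.
    Stand-in for hierarchical feature fusion: nearest-neighbor upsampling
    replaces the paper's learned aggregation layers."""
    target_h, target_w = features[0].shape
    fused = np.zeros((target_h, target_w))
    for f in features:
        ry, rx = target_h // f.shape[0], target_w // f.shape[1]
        fused += np.kron(f, np.ones((ry, rx)))  # nearest-neighbor upsample
    return fused / len(features)

def masked_hinge_d_loss(d_real, d_fake, mask):
    """Mask-conditioned hinge discriminator loss: pixels in the unknown
    region (mask == 1) get double weight. The 1 + mask weighting is a
    hypothetical choice; the paper's exact conditioning may differ."""
    w = 1.0 + mask
    real_term = np.maximum(0.0, 1.0 - d_real) * w
    fake_term = np.maximum(0.0, 1.0 + d_fake) * w
    return real_term.mean() + fake_term.mean()

def rotation_loss(logits, label):
    """Self-supervised rotation loss: cross-entropy over 4 rotation classes
    (0/90/180/270 degrees), a common auxiliary task for stabilizing
    discriminator training."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[label])
```

For example, fusing a 4x4 map with a 2x2 map yields a 4x4 result, and a discriminator scoring real samples above +1 and fake samples below -1 incurs zero hinge loss on every pixel regardless of the mask weighting.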
