YOU ONLY NEED THE IMAGE: UNSUPERVISED FEW-SHOT SEMANTIC SEGMENTATION WITH CO-GUIDANCE NETWORK

Haochen Wang, Yandan Yang, Xiaolong Jiang, Xianbin Cao, Xiantong Zhen

Length: 12:06
28 Oct 2020

Few-shot semantic segmentation has recently attracted attention for its ability to segment unseen-class images given only a few annotated support samples. Yet existing methods not only need to be trained with large-scale pixel-level annotations on certain seen classes, but also require a few annotated support image-mask pairs to guide segmentation on each unseen class. In this paper, we propose the Co-guidance Network (CGNet) for unsupervised few-shot segmentation, which eliminates the annotation requirement on both seen and unseen classes. Specifically, CGNet segments unseen-class images with only unlabeled support images via the newly designed co-guidance mechanism. Moreover, CGNet is trained on seen classes with a novel co-existence recognition loss, which further removes the need for pixel-level annotations. Extensive experiments on the PASCAL-5i dataset show that the unsupervised CGNet performs comparably to state-of-the-art fully supervised few-shot methods while largely alleviating the annotation burden.
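To make the general setup concrete, the following is a minimal sketch of prototype-style guidance from unlabeled support images: a class prototype is averaged from support features and each query location is scored by cosine similarity to it. This is an illustrative baseline only, not the CGNet co-guidance mechanism; all tensor shapes and function names here are assumptions for the example.

```python
import numpy as np

def prototype_guidance(support_feats, query_feats):
    """Illustrative prototype-style guidance (NOT the CGNet architecture).

    support_feats: (S, C, H, W) features from S unlabeled support images.
    query_feats:   (C, H, W) features from one query image.
    Returns an (H, W) cosine-similarity map used as a foreground score.
    """
    # With no support masks available, pool the whole support feature maps
    # into a single class prototype of shape (C,).
    prototype = support_feats.mean(axis=(0, 2, 3))
    prototype /= np.linalg.norm(prototype) + 1e-8

    # L2-normalize query features per location, then take the dot product
    # with the prototype: this yields cosine similarity at every pixel.
    q = query_feats / (np.linalg.norm(query_feats, axis=0, keepdims=True) + 1e-8)
    return np.tensordot(prototype, q, axes=([0], [0]))  # shape (H, W)

# Toy usage with random "features" standing in for a backbone's output.
support = np.random.rand(5, 64, 32, 32).astype(np.float32)
query = np.random.rand(64, 32, 32).astype(np.float32)
sim = prototype_guidance(support, query)
mask = (sim > sim.mean()).astype(np.uint8)  # crude binary foreground estimate
```

In a real few-shot pipeline the features would come from a pretrained backbone, and the similarity map would be refined by a decoder rather than thresholded directly.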
