
Segmentation-Aware Text-Guided Image Manipulation

Tomoki Haruyama, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama

Length: 00:06:12
21 Sep 2021

In this paper, we propose a novel approach that improves the performance of text-guided image manipulation. Text-guided image manipulation aims to modify specific parts of an input image according to the user's text description by semantically associating image regions with the description. We address a problem of conventional methods: undesired parts of the image are modified because of the gap in representational ability between text descriptions and images. Humans tend to focus primarily on the objects in the foreground of an image, and their text descriptions mostly describe that foreground. It is therefore necessary to introduce not only a foreground-aware bias based on the text description but also a background-aware bias for the regions that the text does not describe. To solve this problem, we introduce an image segmentation network into the generative adversarial network used for image manipulation. Comparative experiments with three state-of-the-art methods demonstrate the effectiveness of our method both quantitatively and qualitatively.
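The abstract describes the idea only at a high level. The following is a minimal PyTorch sketch of that general idea, assuming a text-conditioned GAN generator and a segmentation network that predicts a foreground mask; the module names, shapes, and the mask-based blending rule (`SegmentationAwareManipulator`, `ToyGenerator`, `ToySegmenter`) are illustrative assumptions, not the authors' actual architecture.

```python
# Sketch: a segmentation network supplies a foreground mask so that the
# text-guided edit is applied to the foreground while the background of
# the input image is preserved. All names and shapes are hypothetical.
import torch
import torch.nn as nn


class SegmentationAwareManipulator(nn.Module):
    def __init__(self, generator: nn.Module, segmenter: nn.Module):
        super().__init__()
        self.generator = generator  # text-conditioned image generator
        self.segmenter = segmenter  # segmentation network -> foreground logits

    def forward(self, image: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # Foreground probability mask in [0, 1], shape (B, 1, H, W).
        mask = torch.sigmoid(self.segmenter(image))
        # Text-guided edit of the whole image.
        edited = self.generator(image, text_emb)
        # Foreground-aware bias: apply the edit where the mask is high.
        # Background-aware bias: keep the original pixels elsewhere.
        return mask * edited + (1.0 - mask) * image


# Toy stand-ins so the sketch runs end to end (hypothetical components).
class ToyGenerator(nn.Module):
    def __init__(self, text_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(text_dim, 3)
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, image, text_emb):
        bias = self.proj(text_emb).unsqueeze(-1).unsqueeze(-1)  # (B, 3, 1, 1)
        return torch.tanh(self.conv(image) + bias)


class ToySegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, image):
        return self.conv(image)  # raw logits; sigmoid is applied by the caller


if __name__ == "__main__":
    model = SegmentationAwareManipulator(ToyGenerator(), ToySegmenter())
    img = torch.rand(2, 3, 64, 64)   # batch of input images
    txt = torch.randn(2, 256)        # batch of text embeddings
    out = model(img, txt)
    print(out.shape)                 # torch.Size([2, 3, 64, 64])
```

In this sketch, the predicted mask gates where the text-guided edit takes effect, and its complement preserves the original background pixels, which is one simple way to realize the foreground- and background-aware biases described above.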
