Cascade Attention Fusion For Fine-Grained Image Captioning Based On Multi-Layer Lstm

Shuang Wang, Yun Meng, Yu Gu, Lei Zhang, Xiutiao Ye, Jingxian Tian, Licheng Jiao

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:13:42
10 Jun 2021

Conventional visual attention-based image captioning approaches typically use image information to guide caption generation. Their results tend to be coarse and overlook details in the image, such as objects, attributes, and the distinguishing aspects of each image. In this paper, we propose a visual and semantic fusion network with a margin-based training guidance mechanism to generate fine-grained image descriptions that capture more objects, attributes, and other distinguishing aspects of images. In our model, the visual attention layer introduces more low-level visual information, while the semantic attention layer provides more high-level semantic attributes. Furthermore, the proposed margin-based loss encourages our model to produce more discriminative descriptions. Extensive experiments on the MS-COCO and Flickr30K image captioning datasets validate our method, and the results show its superior captioning performance: a state-of-the-art 70.6 CIDEr-D on Flickr30K and a competitive 123.5 CIDEr-D on MS-COCO.
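The abstract does not state the exact form of the margin-based loss. As a rough sketch only, margin-based training objectives of this general kind are often hinge (triplet-style) losses that push the score of a caption paired with its matching image above the score for a mismatched image by at least a margin; the function name and margin value below are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def margin_loss(pos_scores, neg_scores, margin=0.2):
    """Hinge-style margin loss (illustrative sketch, not the paper's formula).

    pos_scores: scores of captions paired with their matching images.
    neg_scores: scores of the same captions paired with mismatched images.
    Penalizes each pair whose negative score comes within `margin`
    of the positive score; well-separated pairs contribute zero.
    """
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    return np.maximum(0.0, margin - (pos - neg)).mean()
```

A separated pair (positive score well above negative) incurs no penalty, so the gradient focuses training on captions that do not yet discriminate their image from others.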

Chairs:
Soohyun Bae
