ATTEN-ADAPTER: A UNIFIED ATTENTION-BASED ADAPTER FOR EFFICIENT TUNING

Kaiwen Li, Wenzhe Gu, Maixuan Xue, Jiahua Xiao, Dahu Shi, Xing Wei

Poster 10 Oct 2023

Large pre-trained models have proliferated in recent years, and a number of parameter-efficient tuning methods have been proposed to transfer their prior knowledge to specific downstream tasks with promising results. This paper proposes a simple yet effective method called Atten-Adapter. To the best of our knowledge, this is the first work to use attention with learnable parameters as the internal structure of an adapter for fine-tuning. Compared to an MLP-based adapter, the attention-based adapter provides better information fusion and attends more to global features. As a plug-and-play module, Atten-Adapter can be readily applied to different types of vision models, such as ConvNets and Transformer architectures, across tasks like classification and segmentation. We further demonstrate the generality of the proposed adapter through experiments on language models. With a small number of tunable parameters, our method achieves significant improvements over previous state-of-the-art methods.
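The abstract does not include code, so the following is only a minimal sketch of the general idea: a bottleneck adapter whose internal transformation is learnable multi-head self-attention (for global token mixing) rather than the usual MLP. The class name `AttenAdapter`, the bottleneck width, the head count, and the zero-initialized residual are all illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class AttenAdapter(nn.Module):
    """Hypothetical sketch of an attention-based adapter:
    down-project, mix tokens globally with self-attention,
    up-project, and add back to the frozen backbone's features."""
    def __init__(self, dim: int, bottleneck: int = 64, heads: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)   # down-projection
        self.attn = nn.MultiheadAttention(bottleneck, heads, batch_first=True)
        self.up = nn.Linear(bottleneck, dim)     # up-projection
        nn.init.zeros_(self.up.weight)           # adapter starts as identity
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); the residual preserves the
        # pre-trained features at initialization.
        z = self.down(x)
        z, _ = self.attn(z, z, z)                # global token mixing
        return x + self.up(z)

# Usage: insert after a frozen backbone block and train only the adapter.
if __name__ == "__main__":
    x = torch.randn(2, 196, 768)                 # e.g. ViT patch tokens
    adapter = AttenAdapter(dim=768)
    print(adapter(x).shape)                      # torch.Size([2, 196, 768])
```

Zero-initializing the up-projection is a common adapter convention that keeps the module an identity map at the start of tuning; whether Atten-Adapter uses it is an assumption here.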
