04 May 2020

Domain adversarial training is a popular approach to Unsupervised Domain Adaptation (DA). However, the transferability of the adversarial training framework can degrade sharply on adaptation tasks with a large distribution divergence between the source and target domains. In this paper, we propose a new approach, termed Adversarial Mixup Synthesis Training (AMST), to alleviate this issue. AMST augments training with synthesized samples produced by linearly interpolating between pairs of hidden representations and their domain labels. In this way, AMST encourages the model to make consistent, less confident domain predictions at interpolated points, which leads to domain-specific representations with fewer directions of variance. Building on prior work, we give a theoretical analysis of this phenomenon under ideal conditions and show that AMST can improve generalization. Finally, experiments on benchmark datasets demonstrate the effectiveness and practicality of AMST. We will publicly release our code on GitHub soon.
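As a rough illustration of the interpolation step described above, the following PyTorch sketch mixes source and target hidden features together with their domain labels and trains a domain discriminator against the resulting soft labels. It is a minimal sketch under stated assumptions, not the paper's implementation: the names (DomainDiscriminator, mixup_domain_loss), the Beta(alpha, alpha) mixing coefficient, and the 1/0 domain-label convention are all illustrative choices, not details given in the abstract.

# Hypothetical sketch of mixup on hidden representations and domain labels.
# Names and hyperparameters are illustrative, not from the paper's code.
import torch
import torch.nn as nn

class DomainDiscriminator(nn.Module):
    """Predicts a domain logit from a hidden feature vector."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h).squeeze(-1)  # shape: (batch,)

def mixup_domain_loss(disc, h_src, h_tgt, alpha: float = 0.2):
    """Linearly interpolate source/target features and their domain
    labels, then score the discriminator against the soft labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    n = min(h_src.size(0), h_tgt.size(0))
    h_mix = lam * h_src[:n] + (1.0 - lam) * h_tgt[:n]
    # Domain labels: source = 1, target = 0, so the mixed label is lam.
    y_mix = torch.full((n,), lam)
    logits = disc(h_mix)
    return nn.functional.binary_cross_entropy_with_logits(logits, y_mix)

if __name__ == "__main__":
    disc = DomainDiscriminator(feat_dim=256)
    h_src = torch.randn(32, 256)  # stand-in for source hidden features
    h_tgt = torch.randn(32, 256)  # stand-in for target hidden features
    loss = mixup_domain_loss(disc, h_src, h_tgt)
    loss.backward()               # gradients flow into the discriminator
    print(f"mixup domain loss: {loss.item():.4f}")

In a full domain-adversarial pipeline, this loss would be combined with the task loss on the feature extractor (e.g., via a gradient reversal layer); the Beta draw follows the standard mixup recipe and is an assumption here.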
