  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:05:31
08 Jun 2021

This paper addresses the problem of blind distillation, which aims to train a student model on unlabeled data under the supervision of a pre-trained teacher model. The proposed framework introduces metric learning into blind distillation. Specifically, teacher-assisted mini-batch (TAM) sampling is proposed, which constructs triplets of anchor, positive, and negative samples from unlabeled data using the teacher's knowledge. In addition, we propose a metric-based loss, the Contrastive Additive Margin (CAM) Softmax loss, which efficiently uses all combinations of triplets in each mini-batch obtained by TAM sampling. Experiments demonstrate the effectiveness of the proposed framework on face and speaker verification tasks, where student models are trained on unlabeled VoxCeleb videos with a teacher model pre-trained on VGGFace2 images.
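To illustrate the general shape of a contrastive loss with an additive margin, the sketch below computes a softmax cross-entropy over one anchor-positive cosine similarity against all anchor-negative similarities in the mini-batch. This is only a minimal NumPy sketch under assumed conventions: the function name, the `scale` and `margin` values, and the use of every negative in the batch per anchor are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cam_softmax_loss(anchors, positives, negatives, scale=30.0, margin=0.2):
    """Illustrative contrastive additive-margin softmax loss (assumed form).

    anchors, positives: (B, D) embeddings, paired row by row.
    negatives: (B, D) embeddings; every row serves as a negative for
    every anchor, so all anchor-negative combinations in the batch are used.
    """
    def l2norm(x):
        # Normalize rows so dot products become cosine similarities.
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    a, p, n = l2norm(anchors), l2norm(positives), l2norm(negatives)
    pos = np.sum(a * p, axis=1)        # cosine(anchor_i, positive_i), shape (B,)
    neg = a @ n.T                      # cosines with all negatives, shape (B, B)
    # Additive margin: subtract m from the positive logit before the softmax,
    # forcing the positive to beat the negatives by at least the margin.
    logits = np.concatenate([scale * (pos - margin)[:, None],
                             scale * neg], axis=1)   # (B, B + 1)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    # Cross-entropy with the positive (column 0) as the target class.
    log_prob = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return float(-log_prob.mean())
```

In a blind-distillation setting, the triplets fed to such a loss would come from TAM sampling on unlabeled data, with the teacher's embeddings deciding which samples act as positives and negatives for each anchor.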

Chairs:
Dong Tian
