One-Cycle Pruning: Pruning Convnets With Tight Training Budget
Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia
SPS
Length: 00:12:39
Despite the recent success of deep learning architectures, person re-identification (ReID) remains a challenging problem. Several single-target domain adaptation (STDA) methods have recently been proposed to limit the decline in ReID accuracy caused by the domain shift that typically occurs between source and target video data. Although multi-target domain adaptation (MTDA) has not been widely addressed in the ReID literature, a straightforward approach is to blend the different target datasets and perform STDA on the mixture to train a common CNN. However, this approach may lead to poor generalization, especially when blending a growing number of distinct target domains to train a smaller CNN. To alleviate this problem, we introduce a new MTDA method based on knowledge distillation (KD-ReID) that is suitable for real-time person ReID. Our method adapts a common lightweight student backbone CNN over the target domains by alternately distilling from multiple specialized teacher CNNs, each adapted to data from a specific target domain. Extensive experiments conducted on several challenging person ReID datasets indicate that our approach outperforms state-of-the-art methods for MTDA, including blending methods, particularly when training a compact CNN backbone such as OSNet. These results suggest that our approach can be employed in cost-effective ReID systems for real-time applications.
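The core idea of the abstract, adapting one lightweight student backbone by alternating distillation steps across several domain-specific teachers, can be illustrated with a short PyTorch-style sketch. This is a minimal sketch under assumed names (`adapt_student`, `target_loaders`, the embedding-matching loss); it is not the authors' implementation, and the paper's exact distillation objective and training schedule may differ.

```python
# Minimal sketch of alternating multi-teacher distillation, assuming PyTorch.
# All names (student, teachers, target_loaders, the MSE embedding loss) are
# illustrative assumptions, not the KD-ReID authors' code.
import torch
import torch.nn.functional as F


def embedding_distillation_loss(student_feat, teacher_feat):
    """Match L2-normalized embeddings; a common choice for retrieval-style
    tasks such as ReID (the paper's exact loss may differ)."""
    return F.mse_loss(F.normalize(student_feat, dim=1),
                      F.normalize(teacher_feat, dim=1))


def adapt_student(student, teachers, target_loaders,
                  epochs=10, lr=3e-4, device="cpu"):
    """Adapt one common student CNN by cycling over (teacher, target-domain) pairs.

    teachers[i] is assumed to already be adapted (e.g. via STDA) to target
    domain i; target_loaders[i] is assumed to yield batches of image tensors
    from that domain (labels are not needed for the distillation step).
    """
    student.to(device).train()
    for teacher in teachers:
        teacher.to(device).eval()
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)

    for _ in range(epochs):
        # Alternate over target domains: one distillation step per domain,
        # round-robin, so the single student sees every domain each cycle.
        for batches in zip(*target_loaders):
            for teacher, images in zip(teachers, batches):
                images = images.to(device)
                with torch.no_grad():
                    teacher_feat = teacher(images)   # frozen teacher embedding
                student_feat = student(images)       # student embedding
                loss = embedding_distillation_loss(student_feat, teacher_feat)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return student
```

In a setup matching the abstract, `student` would be a compact backbone such as OSNet, each teacher a larger CNN adapted to one target domain, and each `target_loaders[i]` a data loader over the corresponding target dataset.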