Poster 11 Oct 2023

To date, most existing distillation methods exploit coarse-grained, instance-level information as the knowledge to transfer, such as instance logits, instance features, and instance relations. However, the fine-grained knowledge of internal regions and of relationships between semantic entities within a single instance is overlooked and not fully explored. To address these limitations, we propose a novel fine-grained, patch-level distillation method, dubbed Patch-Aware Knowledge Distillation (PAKD). PAKD rethinks knowledge distillation from a new perspective, emphasizing the significance of cross-layer patch alignment and of patch relations within and across instances. Specifically, we first devise a novel cross-layer architecture that fuses patches across stages, enabling multi-level information from the teacher to guide single-level learning in the student. We then propose cross-layer patch alignment, which lets the student attend to patches discriminatively and find the best way to learn from the teacher. In addition, patch relations within and across instances are leveraged to supervise structural knowledge distillation in the manifold space. We apply our method to image classification and object detection. Consistent improvements over state-of-the-art approaches on different datasets and diverse teacher-student combinations demonstrate the great potential of the proposed PAKD.
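To make the patch-relation idea concrete, below is a minimal PyTorch sketch of one plausible ingredient: a structural loss that matches pairwise patch similarities between teacher and student, computed over all patches in a batch so relations span both within and across instances. The function name, tensor shapes, cosine-similarity relation matrix, and MSE objective are all illustrative assumptions, not PAKD's published formulation, which also involves the cross-layer fusion and alignment modules described above.

```python
import torch
import torch.nn.functional as F

def patch_relation_loss(teacher_feats: torch.Tensor,
                        student_feats: torch.Tensor) -> torch.Tensor:
    """Hypothetical patch-relation distillation loss (not the official PAKD code).

    teacher_feats, student_feats: (B, N, C) patch embeddings, where B is the
    batch size, N the number of patches per instance, and C the channel dim.
    """
    # Flatten batch and patch dims so the relation matrix covers patches
    # both within and across instances, as the abstract describes.
    t = F.normalize(teacher_feats.reshape(-1, teacher_feats.size(-1)), dim=-1)
    s = F.normalize(student_feats.reshape(-1, student_feats.size(-1)), dim=-1)

    # Pairwise cosine-similarity ("relation") matrices over all patches.
    rel_t = t @ t.transpose(0, 1)   # (B*N, B*N)
    rel_s = s @ s.transpose(0, 1)

    # Encourage the student's relational structure to match the teacher's.
    return F.mse_loss(rel_s, rel_t)

# Usage with dummy features, e.g. 7x7 = 49 patches of 256 channels:
teacher_feats = torch.randn(4, 49, 256)
student_feats = torch.randn(4, 49, 256)
loss = patch_relation_loss(teacher_feats, student_feats)
```

In practice the student's channel dimension often differs from the teacher's, in which case a learned projection would be applied before computing the relation matrices; that detail is omitted here for brevity.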
