DIVERGENCE-GUIDED FEATURE ALIGNMENT FOR CROSS-DOMAIN OBJECT DETECTION
Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
Domain shift causes a performance drop in cross-domain object detection. To alleviate the domain shift, a prevailing approach is global feature alignment with adversarial learning. However, such simple feature alignment is unaware of both foreground/background regions and well-aligned/poorly-aligned regions. To remedy these defects, in this paper we propose a novel divergence-guided feature alignment method for cross-domain object detection. Specifically, we generate source-like images of the target domain and derive cues for foreground regions and poorly aligned regions from the prediction divergence between the source-like and original images. The feature alignment is guided by the resulting divergence maps and consequently achieves adaptation performance superior to alignment that is unaware of these cues. Unlike most previous studies, which focus on two-stage object detection, this paper is devoted to adapting one-stage object detectors, which offer simpler and faster inference. We validated the effectiveness of our method with experiments in cross-weather, cross-camera, and synthetic-to-real adaptation scenarios.
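
To make the guidance idea concrete, the following is a minimal PyTorch sketch of divergence-weighted adversarial alignment. It is an illustrative assumption, not the paper's exact formulation: the function names (`divergence_map`, `guided_alignment_loss`), the symmetric-KL divergence choice, and the patch-level domain discriminator interface are all hypothetical.

```python
import torch
import torch.nn.functional as F

def divergence_map(pred_orig, pred_srclike, eps=1e-8):
    """Per-location divergence between detector class predictions on a
    target image and on its source-like translation.

    pred_orig, pred_srclike: (B, C, H, W) class probability maps from a
    one-stage detector head. Symmetric KL is one possible choice of
    divergence (an assumption, not necessarily the paper's).
    """
    p = pred_orig.clamp_min(eps)
    q = pred_srclike.clamp_min(eps)
    kl_pq = (p * (p / q).log()).sum(dim=1)  # (B, H, W)
    kl_qp = (q * (q / p).log()).sum(dim=1)
    return 0.5 * (kl_pq + kl_qp)

def guided_alignment_loss(disc_logits, div_map, domain_label):
    """Adversarial alignment loss re-weighted by the divergence map:
    locations with high divergence (likely foreground or poorly aligned)
    receive larger weight.

    disc_logits: (B, 1, H, W) outputs of a patch-level domain
    discriminator; domain_label: 1.0 for source, 0.0 for target.
    """
    # Normalize divergence to [0, 1] per image so weights are comparable.
    w = div_map / (div_map.amax(dim=(1, 2), keepdim=True) + 1e-8)
    target = torch.full_like(disc_logits, domain_label)
    per_loc = F.binary_cross_entropy_with_logits(
        disc_logits, target, reduction="none").squeeze(1)  # (B, H, W)
    return (w * per_loc).mean()
```

The intuition behind the weighting: locations whose predictions change strongly when the target image is replaced by its source-like translation are the ones most affected by the domain gap, so the adversarial loss is concentrated there instead of being spread uniformly over the feature map. In practice such a discriminator loss would typically be back-propagated through a gradient reversal layer or an alternating min-max scheme, as is standard for adversarial feature alignment.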