Feature Disentanglement For Cross-Domain Retina Vessel Segmentation
Jie Wang, Chaoliang Zhong, Cheng Feng, Jun Sun, Yasuto Yokota
SPS
Domain shift is regarded as a key factor affecting the robustness of many models. Recently, unsupervised auxiliary learning (e.g., input reconstruction) has been proposed to improve a model's domain transferability and alleviate cross-domain performance degradation. However, existing approaches share the features extracted for the various tasks, which mixes the domain-invariant features needed by the main task with the domain-specific features needed by the auxiliary task, leading to imperfect learning. To solve this problem, we propose a novel unsupervised domain adaptation method, the Disentangled Reconstruction Neural Network (DRNN), for cross-domain retina vessel segmentation. DRNN leverages two tandem networks to disentangle the domain-invariant features from the domain-specific features during multi-task learning. Extensive experiments on public retina datasets show that DRNN outperforms its competitors by a significant margin and achieves state-of-the-art results for retina vessel segmentation.
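The core idea of the abstract (splitting features so the segmentation head sees only a domain-invariant part while the reconstruction task also consumes a domain-specific part) can be sketched as follows. This is a minimal illustrative sketch, not the paper's DRNN: the layer sizes, the `tanh` nonlinearity, and the single-linear-map "encoders" and "decoder" are all assumptions made for brevity.

```python
import numpy as np

# Hypothetical sketch of disentangled multi-task features: the input is encoded
# into a domain-invariant part z_inv (used by the segmentation head) and a
# domain-specific part z_spec (used, together with z_inv, only by the
# reconstruction decoder). All dimensions and weights are illustrative.

rng = np.random.default_rng(0)

D_IN, D_INV, D_SPEC = 32, 16, 8  # input dim, invariant dim, specific dim

# Randomly initialised linear maps standing in for the two encoders and heads.
W_inv = rng.standard_normal((D_IN, D_INV)) * 0.1            # invariant encoder
W_spec = rng.standard_normal((D_IN, D_SPEC)) * 0.1          # specific encoder
W_seg = rng.standard_normal((D_INV, 1)) * 0.1               # segmentation head
W_rec = rng.standard_normal((D_INV + D_SPEC, D_IN)) * 0.1   # reconstruction decoder

def forward(x):
    z_inv = np.tanh(x @ W_inv)    # domain-invariant features
    z_spec = np.tanh(x @ W_spec)  # domain-specific features
    # Main task (vessel probability) uses only the invariant features.
    seg = 1.0 / (1.0 + np.exp(-(z_inv @ W_seg)))
    # Auxiliary reconstruction uses both parts, so domain-specific
    # information has somewhere to go without polluting the main task.
    rec = np.concatenate([z_inv, z_spec], axis=-1) @ W_rec
    return seg, rec

x = rng.standard_normal((4, D_IN))  # a toy batch of per-pixel feature vectors
seg, rec = forward(x)
print(seg.shape, rec.shape)  # (4, 1) (4, 32)
```

The split keeps the auxiliary reconstruction loss from forcing domain-specific detail into the features used for segmentation, which is the failure mode the abstract attributes to fully shared features.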