Disentangled Representation Learning For Deep Mr To Ct Synthesis Using Unpaired Data
Runze Wang, Guoyan Zheng
Many different methods have been proposed for generating synthetic CT (sCT) images from MR images. Most of these methods depend on pairwise-aligned MR and CT training images of the same patient, which are difficult to obtain. In this paper, we propose a novel disentangled representation learning method for MR to CT synthesis using unpaired data. Specifically, we first embed images into two spaces: a modality-invariant geometry space capturing the anatomical information shared across imaging domains, and a modality-specific appearance space. From this embedding, an sCT image can be synthesized from an MR image by combining the geometry features encoded from the MR image with an appearance vector sampled from the appearance space of a CT image. To handle the challenge of distinguishing cortical bone from air in MR images, where both have low intensity values, we propose a novel Geometry Similarity Module (GSM) that takes context information into consideration. Experimental results demonstrated that our approach achieved results better than or comparable to the state-of-the-art.
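To make the synthesis step concrete, below is a minimal PyTorch-style sketch of the geometry/appearance split described above. All module names, layer choices, and dimensions (GeometryEncoder, AppearanceEncoder, Decoder, the appearance dimension app_dim) are hypothetical illustrations, not the paper's actual architecture or the GSM.

```python
# Minimal sketch: disentangled MR -> sCT synthesis.
# Geometry features come from the MR image; the appearance code
# comes from the CT appearance space. All details are assumptions.
import torch
import torch.nn as nn

class GeometryEncoder(nn.Module):
    """Maps an image to modality-invariant geometry features."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 7, padding=3),
            nn.InstanceNorm2d(feat_ch), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
            nn.InstanceNorm2d(feat_ch), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class AppearanceEncoder(nn.Module):
    """Maps an image to a modality-specific appearance vector."""
    def __init__(self, in_ch=1, app_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, app_dim)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Decoder(nn.Module):
    """Combines geometry features and an appearance code into an image."""
    def __init__(self, feat_ch=64, app_dim=8, out_ch=1):
        super().__init__()
        self.fuse = nn.Conv2d(feat_ch + app_dim, feat_ch, 3, padding=1)
        self.out = nn.Conv2d(feat_ch, out_ch, 7, padding=3)
    def forward(self, geo, app):
        # Broadcast the appearance vector over all spatial locations.
        app_map = app[:, :, None, None].expand(-1, -1, geo.size(2), geo.size(3))
        h = torch.relu(self.fuse(torch.cat([geo, app_map], dim=1)))
        return torch.tanh(self.out(h))

# MR -> sCT: geometry from the MR image, appearance from the CT space.
enc_geo, enc_app_ct, dec_ct = GeometryEncoder(), AppearanceEncoder(), Decoder()
mr = torch.randn(1, 1, 256, 256)   # unpaired MR slice
ct = torch.randn(1, 1, 256, 256)   # unpaired CT slice
geo_mr = enc_geo(mr)               # modality-invariant geometry features
app_ct = enc_app_ct(ct)            # CT appearance code (could instead be
                                   # sampled from a prior, e.g. N(0, I))
sct = dec_ct(geo_mr, app_ct)       # synthetic CT image
```

Because the geometry encoder is shared across domains while each modality keeps its own appearance code, the same geometry features can in principle be decoded into either modality by swapping the appearance vector, which is what makes training on unpaired data possible.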