A GENERATIVE ADVERSARIAL NETWORK FOR MEDICAL IMAGE FUSION
Zhuliang Le, Jun Huang, Fan Fan, Xin Tian, Jiayi Ma
SPS
In this paper, we propose a novel end-to-end model for fusing medical images of different resolutions: images characterizing structural information, i.e., I_S, and images characterizing functional information, i.e., I_F. The model is a conditional generative adversarial network with multiple generators and multiple discriminators (MGMDcGAN). In the first cGAN, a generator produces a real-like fused image that simultaneously fools two discriminators, while the discriminators aim to distinguish the fused image from the source images. In addition, to prevent the functional information from being weakened when the dense structural information is enhanced, we employ a second cGAN with a computed mask. In this way, the structural information in I_S and the functional information in I_F are concurrently preserved in the final fused image. Furthermore, MGMDcGAN is a unified method applicable to different kinds of medical image fusion, including MRI-PET, MRI-SPECT, and CT-SPECT. Extensive experiments on publicly available datasets substantiate the superiority of MGMDcGAN over the current state of the art.
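The one-generator, two-discriminator adversarial objective described in the abstract can be sketched as below. This is a minimal illustration using a standard binary cross-entropy GAN loss; the function names, the specific loss form, and the numeric scores are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def bce(pred, target, eps=1e-12):
    """Binary cross-entropy on sigmoid probabilities in [0, 1]."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def generator_loss(d_struct_on_fused, d_func_on_fused):
    """The generator tries to make BOTH discriminators (one conditioned on the
    structural source I_S, one on the functional source I_F) score the fused
    image as real (label 1), so its loss sums both adversarial terms."""
    return (bce(d_struct_on_fused, np.ones_like(d_struct_on_fused))
            + bce(d_func_on_fused, np.ones_like(d_func_on_fused)))

def discriminator_loss(d_on_real_source, d_on_fused):
    """Each discriminator labels its own source image as real (1) and the
    generator's fused image as fake (0)."""
    return (bce(d_on_real_source, np.ones_like(d_on_real_source))
            + bce(d_on_fused, np.zeros_like(d_on_fused)))
```

In training, the generator and the two discriminators would minimize these losses alternately; the second cGAN described in the abstract would apply an analogous objective restricted to the masked (functional) region.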