RESTORABLE VISIBLE AND INFRARED IMAGE FUSION
Jihun Kang, Daichi Horita, Koki Tsubota, Kiyoharu Aizawa
Image fusion aims to synthesize multiple source images into a single image that integrates and enhances their information. Specifically, we tackle the fusion of visible and infrared images. Previous works generally train a deep-learning-based fusion model using structural similarity between the fused image and the paired source images. However, relying on structure alone often yields images with insufficient texture. In this study, we aim to generate a fused image rich in texture, inspired by the ability of an autoencoder to learn a compressed representation of its input. Specifically, we train the model so that the fused image preserves both the structure and texture of the source images. We propose a novel framework, Restorable visible and infrared Image Fusion, which consists of a fusion network and a decoupling network. The fusion network synthesizes the source images into a fused image, and the decoupling network restores the source images by decomposing that fused image. The framework is trained by minimizing the difference between the source images and their restorations. Experimental results demonstrate that fused images generated by the proposed method retain the texture of the source images.
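To make the training objective concrete, the following is a minimal PyTorch sketch of the fusion/decoupling idea described above. The network names (`FusionNet`, `DecouplingNet`), layer widths, and the L1 restoration loss are illustrative assumptions, not the paper's actual architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionNet(nn.Module):
    """Hypothetical fusion network: maps a (visible, infrared) pair to one fused image."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, vis, ir):
        # Concatenate the two grayscale sources along the channel axis.
        return self.net(torch.cat([vis, ir], dim=1))

class DecouplingNet(nn.Module):
    """Hypothetical decoupling network: decomposes the fused image back into two sources."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, fused):
        out = self.net(fused)
        return out[:, :1], out[:, 1:]  # restored visible, restored infrared

fuse, decouple = FusionNet(), DecouplingNet()
vis = torch.rand(1, 1, 32, 32)  # dummy visible image
ir = torch.rand(1, 1, 32, 32)   # dummy infrared image

fused = fuse(vis, ir)
vis_hat, ir_hat = decouple(fused)

# Restoration loss: the fused image must carry enough information
# for the decoupling network to recover both source images.
loss = F.l1_loss(vis_hat, vis) + F.l1_loss(ir_hat, ir)
```

In practice this restoration loss would be minimized jointly over both networks, so the fused image is pushed to retain the structure and texture of both sources rather than only their structural similarity.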