28 Oct 2020

In remote sensing, “pan-sharpening” is the task of enhancing the spatial resolution of a multi-spectral (MS) image by exploiting the high-frequency information in a panchromatic (PAN) reference image. We present a novel color-aware perceptual (CAP) loss for learning the task of pan-sharpening. Our CAP loss focuses on the deep features of a pre-trained VGG network that are sensitive to spatial detail while ignoring color information, allowing the network to extract structural information from the PAN image while preserving the colors of the lower-resolution MS image. Additionally, we propose “guided re-colorization”, which generates a pan-sharpened image with real colors from the MS input by “picking” the closest MS pixel color for each pan-sharpened pixel, as a human operator would do in manual colorization. Such a re-colorized (RC) image is completely aligned with the pan-sharpened (PS) network output and can be used as a self-supervision signal during training, or to enhance the colors in the PS image at test time. We present several experiments in which our network trained with the CAP loss generates natural-looking pan-sharpened images with fewer artifacts and outperforms the state of the art on the WorldView-3 dataset in terms of the ERGAS, SCC, and QNR metrics.
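As a minimal sketch of the color-aware perceptual loss idea (not the paper's exact implementation), one way to make a VGG-based perceptual distance respond to spatial structure while ignoring color is to compare features computed on luminance-only versions of the pan-sharpened output and the PAN reference. The layer cut-off and the luminance reduction below are illustrative assumptions, written in PyTorch.

```python
import torch
import torch.nn as nn
import torchvision


class CAPLossSketch(nn.Module):
    """Sketch of a color-aware perceptual loss.

    Assumption: "color-insensitive" VGG features are approximated here by
    feeding luminance images replicated to 3 channels, so the comparison is
    driven by spatial structure rather than color. The paper's actual
    feature/channel selection may differ.
    """

    def __init__(self, layer_idx: int = 16):  # up to relu3_3 in VGG16; an illustrative choice
        super().__init__()
        vgg = torchvision.models.vgg16(weights="DEFAULT").features[:layer_idx]
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg.eval()

    @staticmethod
    def _to_luma3(x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) in [0, 1]; collapse channels to luminance, repeat to 3
        y = x.mean(dim=1, keepdim=True)
        return y.repeat(1, 3, 1, 1)

    def forward(self, pan_sharpened: torch.Tensor, pan_reference: torch.Tensor) -> torch.Tensor:
        f_ps = self.vgg(self._to_luma3(pan_sharpened))
        f_pan = self.vgg(self._to_luma3(pan_reference))
        return nn.functional.l1_loss(f_ps, f_pan)
```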
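Guided re-colorization, as described above, picks the closest MS pixel color for each pan-sharpened pixel. A hedged sketch under simple assumptions: search a local window of the upsampled MS image and copy the spectrally nearest color. The window size and squared spectral distance are illustrative choices, not confirmed details of the paper.

```python
import torch
import torch.nn.functional as F


def guided_recolorization_sketch(ps: torch.Tensor, ms_lowres: torch.Tensor, window: int = 5) -> torch.Tensor:
    """For each pan-sharpened pixel, pick the closest MS color in a local window.

    ps:        (B, C, H, W) pan-sharpened network output
    ms_lowres: (B, C, h, w) low-resolution multi-spectral input
    """
    B, C, H, W = ps.shape
    ms_up = F.interpolate(ms_lowres, size=(H, W), mode="bilinear", align_corners=False)

    pad = window // 2
    # Local MS neighborhood of every pixel: (B, C * window**2, H * W)
    patches = F.unfold(ms_up, kernel_size=window, padding=pad)
    patches = patches.view(B, C, window * window, H * W)        # (B, C, K, N)

    ps_flat = ps.view(B, C, 1, H * W)                            # (B, C, 1, N)
    dist = ((patches - ps_flat) ** 2).sum(dim=1)                 # (B, K, N), squared spectral distance
    idx = dist.argmin(dim=1, keepdim=True)                       # (B, 1, N), index of closest MS color

    # Copy the closest MS color to every PS pixel
    idx = idx.unsqueeze(1).expand(B, C, 1, H * W)                # (B, C, 1, N)
    rc = torch.gather(patches, 2, idx).view(B, C, H, W)
    return rc
```

The resulting RC image is pixel-aligned with the PS output by construction, which is what makes it usable as a self-supervision signal during training or as a color-enhancement step at test time.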