TRANSFORMATION CONSISTENCY FOR REMOTE SENSING IMAGE SUPER-RESOLUTION
Kai Deng, Ping Yao, Siyuan Cheng, Junyu Bi, Kun Zhang
Single Image Super-Resolution (SISR) based on deep learning has been widely studied for applications on remote sensing images. With limited remote sensing images available, most existing SISR methods simply adopt the regular data augmentation approaches used for natural images (such as flipping) to improve model performance. Since remote sensing images are all taken from a bird's-eye view and objects appear in arbitrary orientations, we first introduce a rotation augmentation method for remote sensing images to dramatically increase sample diversity; rotation does not cause the semantic problems it does in natural images, such as people standing upside down. However, rotating images by arbitrary angles requires interpolation, which introduces an inconsistent pixel distribution that is problematic for a pixel-level task. We therefore propose a Transformation Consistency Loss Function to narrow the gap between the augmented and original distributions, while expanding the feature space through rotation augmentation. Extensive experiments are performed on the UC-Merced Land-use dataset of 21 remote sensing scenes, and the results, together with ablation studies, demonstrate that our proposed method outperforms mainstream methods.
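The abstract does not give the exact form of the loss, but a common way to realize transformation consistency is to penalize the disagreement between super-resolving a rotated input and rotating the super-resolved output. The PyTorch sketch below illustrates this under that assumption; the names `transformation_consistency_loss`, `model`, and `lr` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def transformation_consistency_loss(model: nn.Module,
                                    lr: torch.Tensor,
                                    angle: float) -> torch.Tensor:
    """Hypothetical consistency term: SR(rotate(x)) should match rotate(SR(x)).

    Bilinear interpolation is used for the rotation, mirroring the
    interpolation step that causes the pixel-distribution gap the
    abstract describes.
    """
    mode = TF.InterpolationMode.BILINEAR
    # Super-resolve the rotated low-resolution input.
    sr_of_rotated = model(TF.rotate(lr, angle, interpolation=mode))
    # Rotate the super-resolved output of the original input.
    rotated_sr = TF.rotate(model(lr), angle, interpolation=mode)
    return F.l1_loss(sr_of_rotated, rotated_sr)

# Usage with a placeholder x4 upsampler standing in for the SISR network.
model = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1),
                      nn.Upsample(scale_factor=4))
lr = torch.rand(8, 3, 32, 32)  # batch of low-resolution patches
loss = transformation_consistency_loss(model, lr, angle=45.0)
loss.backward()
```

In practice such a term would be added, with a weighting coefficient, to the usual reconstruction loss between the super-resolved output and the high-resolution ground truth.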