Multi-Scale Feature Guided Low-Light Image Enhancement
Lanqing Guo, Renjie Wan, Guan-Ming Su, Alex C. Kot, Bihan Wen
SPS
Length: 00:07:59
Low-light image enhancement aims at increasing the intensity of image pixels to better match human perception and to improve the performance of subsequent vision tasks. While it is relatively easy to enlighten a globally low-light image, the lighting conditions of realistic scenes are usually non-uniform and complex; e.g., an image may contain both bright and extremely dark regions, with or without rich features and information. Without proper guidance, existing methods often generate abnormal enhancement results with over-exposure artifacts. To tackle this challenge, we propose a multi-scale feature guided attention mechanism in the deep generator, which can effectively perform spatially-varying light enhancement. The attention map is fused from both the gray map and the extracted feature map of the input image, so that enhancement focuses on dark and informative regions. Our baseline is an unsupervised generative adversarial network, which can be trained without any low/normal-light image pairs. Experimental results demonstrate the superiority of our method over state-of-the-art alternatives in both visual quality and the performance of subsequent object detection.
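The core idea of the guided attention can be illustrated with a minimal sketch: combine a gray map (which is large in dark regions) with a feature-activation map (which is large in informative regions) into a single spatial attention map. This is an illustrative NumPy approximation, not the paper's implementation; the exact fusion operator and the source of the feature map (here a plain magnitude map normalized to [0, 1]) are assumptions, and the multi-scale and adversarial components are omitted.

```python
import numpy as np

def attention_map(rgb, feat):
    """Fuse a gray map with a feature map into a spatial attention map.

    rgb:  H x W x 3 image with values in [0, 1]
    feat: H x W feature-activation magnitudes (e.g., from an encoder)

    Hypothetical fusion: element-wise product of the two normalized
    maps, so attention is highest where the image is both dark and
    feature-rich. The actual operator in the paper may differ.
    """
    # Gray map: 1 - luminance, so darker pixels receive larger values.
    luminance = rgb @ np.array([0.299, 0.587, 0.114])
    gray_map = 1.0 - luminance
    # Normalize the feature map to [0, 1] so the two maps are comparable.
    f = (feat - feat.min()) / (feat.max() - feat.min() + 1e-8)
    # Fuse and renormalize to [0, 1].
    att = gray_map * f
    return att / (att.max() + 1e-8)
```

In a generator, such a map would typically be broadcast over the feature channels and multiplied into intermediate activations, so dark, informative regions receive stronger enhancement than already-bright ones.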