LEARNING IMAGE AESTHETICS BY LEARNING INPAINTING
June Hao Ching, John See, Lai-Kuan Wong
SPS
Due to their high capability of learning robust features, convolutional neural networks (CNNs) are becoming a mainstay solution for many computer vision problems, including aesthetic quality assessment (AQA). However, learning with CNNs requires time-consuming and expensive data annotation, especially for a task like AQA. In this paper, we present a novel approach to AQA that incorporates self-supervised learning (SSL) by learning to inpaint images according to photographic rules such as the rule-of-thirds and visual saliency. We conduct extensive quantitative experiments on a variety of pretext tasks and on different ways of masking patches for inpainting, reporting fairer distribution-based metrics. We also show the suitability and practicality of the inpainting task, which yields comparably good benchmark results with much lighter model complexity.
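To make the masking idea concrete, the sketch below builds a binary inpainting mask centred on one of the four rule-of-thirds intersection points of an image. This is an illustrative assumption about how a composition-aware masking scheme could look, not the authors' exact implementation; the function name `rule_of_thirds_mask` and the fixed square patch are hypothetical choices.

```python
import numpy as np

def rule_of_thirds_mask(h, w, patch=64, rng=None):
    """Return an (h, w) binary mask (1 = region to inpaint) covering a
    square patch centred on a randomly chosen rule-of-thirds intersection.

    Hypothetical sketch of a composition-aware masking scheme; the paper's
    actual masking strategy may differ.
    """
    rng = np.random.default_rng(rng)
    # The four intersection points of the rule-of-thirds grid.
    points = [(h // 3, w // 3), (h // 3, 2 * w // 3),
              (2 * h // 3, w // 3), (2 * h // 3, 2 * w // 3)]
    cy, cx = points[rng.integers(len(points))]
    mask = np.zeros((h, w), dtype=np.uint8)
    # Clip the patch so it stays inside the image bounds.
    y0 = min(max(cy - patch // 2, 0), h - patch)
    x0 = min(max(cx - patch // 2, 0), w - patch)
    mask[y0:y0 + patch, x0:x0 + patch] = 1
    return mask

# Usage: the masked image (image * (1 - mask)) is fed to the inpainting
# network, which is trained to reconstruct the hidden region.
m = rule_of_thirds_mask(256, 256, patch=64, rng=0)
```

A saliency-driven variant would place the patch at the peak of a saliency map instead of a fixed grid intersection, forcing the pretext network to reconstruct the most visually important content.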