Gpg-Net: Face Inpainting With Generative Parsing Guidance
Yuelong Li, Jialiang Yan, Jianming Wang
SPS
Face inpainting is a meaningful but challenging task in the fields of computer vision and image processing. As is well known, restoring the overall structural information is critical to successful image inpainting. Hence, in this paper, we employ face parsing to assist facial image reconstruction. Intact facial images contain extensive details, which may be difficult to recover perfectly when the images are seriously damaged, whereas their corresponding parsing maps are much purer, accommodating only the overall structural information. Therefore, recovering the face parsing map is a comparatively simple and tractable task. Based on this idea, a two-stage face inpainting framework, namely the Generative Parsing Guidance Network (GPG-Net), is developed. Moreover, a Semantic Compensation Module (SCM) is incorporated to ensure effective aggregation of context information, while a Contextual Attention Module (CAM) is introduced to further improve appearance rationality. Experiments are conducted extensively on the publicly available CelebA-HQ dataset to verify the effectiveness of the proposed approach.
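The two-stage idea above — first recover a coarse parsing map from the damaged face, then use that map to guide image completion — can be sketched in skeleton form. This is only an illustrative sketch, not the paper's implementation: the function names, the number of parsing classes, and the placeholder "networks" (random logits and a mean fill) are all assumptions standing in for the actual GPG-Net generators.

```python
import numpy as np

def stage1_predict_parsing(masked_img):
    """Stand-in for stage 1: predict a per-pixel parsing map
    from the damaged image (real model: a parsing generator)."""
    h, w, _ = masked_img.shape
    n_classes = 5  # hypothetical number of facial parsing classes
    logits = np.random.default_rng(0).random((h, w, n_classes))
    return logits.argmax(axis=-1)  # (h, w) class labels

def stage2_inpaint(masked_img, mask, parsing_map):
    """Stand-in for stage 2: the predicted parsing map is concatenated
    with the damaged image as structural guidance for completion."""
    guidance = np.concatenate(
        [masked_img, parsing_map[..., None].astype(float)], axis=-1)
    # A real model would run a guided generator on `guidance`;
    # here we simply fill the hole with the mean of known guidance values.
    completed = masked_img.copy()
    completed[mask == 0] = guidance[mask == 0, :3].mean()
    return completed

# Toy usage: an 8x8 RGB "face" with a square hole (mask == 0 is missing).
img = np.ones((8, 8, 3))
mask = np.ones((8, 8), dtype=int)
mask[2:5, 2:5] = 0
masked = img * mask[..., None]

parsing = stage1_predict_parsing(masked)
result = stage2_inpaint(masked, mask, parsing)
print(result.shape)  # completed image, same shape as the input
```

The point of the decomposition is that stage 1 only has to solve the easier structural problem (a handful of semantic classes per pixel), so stage 2 receives a clean layout prior instead of having to hallucinate structure and texture at once.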