DEGRADATION CONDITIONED GAN FOR DEGRADATION GENERALIZATION OF FACE RESTORATION MODELS
Qi Song, Wu Shi, Guojing Ge, Liang Chang
Face restoration models are usually trained on synthetically degraded data to map a low-quality image to its clean counterpart. Most previous methods use a single model to handle all degradation levels, which leads to a domain generalization problem. We explore the value of degradation information and propose a Degradation Conditioned GAN (DeCGAN). The architecture consists of modulated convolution, bias, and fusion modules, inspired by deblurring, denoising, and super-resolution. The whole network can be modulated by the degradation level to achieve delicate and precise restoration. Experiments are conducted on conventional and modulated face restoration tasks. DeCGAN achieves more faithful restoration and better metrics (FID, LPIPS, etc.) than previous methods. Moreover, our model performs well on real-world low-quality face images.
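To make the conditioning mechanism concrete, the sketch below shows a generic StyleGAN2-style modulated convolution whose per-channel scales are derived from a degradation vector. This is a minimal illustrative implementation under our own assumptions (single-sample, stride-1, no padding), not the authors' exact DeCGAN module; the function name and argument layout are hypothetical.

```python
import numpy as np

def modulated_conv2d(x, weight, degradation, demodulate=True, eps=1e-8):
    """Modulated convolution conditioned on a degradation vector.

    Hypothetical sketch, not the paper's exact implementation.

    x:           (C_in, H, W) input feature map (single sample)
    weight:      (C_out, C_in, k, k) convolution kernel
    degradation: (C_in,) per-channel scales derived from the
                 degradation level (e.g. blur/noise severity)
    """
    # Modulate: scale each input channel of the kernel by the
    # degradation-derived factor.
    w = weight * degradation[None, :, None, None]
    if demodulate:
        # Demodulate: renormalize each output filter to unit norm,
        # as in StyleGAN2, to keep activation magnitudes stable.
        norm = np.sqrt((w ** 2).sum(axis=(1, 2, 3), keepdims=True) + eps)
        w = w / norm
    # Naive 'valid' convolution (no padding, stride 1).
    c_out, _, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = (w[o] * x[:, i:i + k, j:j + k]).sum()
    return out
```

Varying `degradation` at inference time lets one network interpolate its restoration strength across degradation levels, which is the behavior the modulated restoration experiments evaluate.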