3D TEXTURE SUPER RESOLUTION VIA THE RENDERING LOSS
Rohit Ranade, Yangwen Liang, Shuangquan Wang, Dongwoon Bai, Jungwon Lee
Deep learning-based methods have made a significant impact and demonstrated superior performance on the classical image and video super-resolution (SR) tasks. Yet, deep learning-based approaches to super-resolving the appearance of 3D objects remain sparse. Due to the nature of rendering 3D models, directly applying 2D SR methods to a 3D object's texture may not be effective. In this paper, we propose a rendering loss derived from the rendering of a 3D model and demonstrate its application to the SR task in the context of 3D texturing. Unlike prior work on 3D appearance SR, no geometry information about the 3D model is required during network inference. Experimental results demonstrate that our proposed networks incorporating the rendering loss outperform existing state-of-the-art methods for 3D appearance SR. Furthermore, we provide a new 3D dataset consisting of 97 complete 3D models for further research in this field.
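To illustrate the idea of a rendering loss, the sketch below is a minimal, hypothetical PyTorch implementation, not the authors' actual method. It assumes the rasterization of the 3D model has been precomputed offline into a per-pixel UV lookup grid (`uv_grid`), so the loss only samples the texture in image space and requires no geometry at inference time; the function name and all shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def rendering_loss(sr_texture, hr_texture, uv_grid):
    """Hypothetical rendering-loss sketch (not the paper's exact formulation).

    sr_texture, hr_texture: (N, C, H, W) super-resolved and ground-truth textures.
    uv_grid: (N, H_img, W_img, 2) per-pixel UV coordinates in [-1, 1],
             precomputed from a fixed rasterization of the 3D model.
    """
    # "Render" each texture by sampling it at the rasterized UV coordinates.
    rendered_sr = F.grid_sample(sr_texture, uv_grid, align_corners=False)
    rendered_hr = F.grid_sample(hr_texture, uv_grid, align_corners=False)
    # Penalize differences in rendered-image space rather than texture space,
    # so texels are weighted by how they actually appear on the model.
    return F.l1_loss(rendered_sr, rendered_hr)

# Toy usage with random textures and a random UV grid.
sr = torch.rand(1, 3, 64, 64, requires_grad=True)
hr = torch.rand(1, 3, 64, 64)
uv = torch.rand(1, 32, 32, 2) * 2 - 1
loss = rendering_loss(sr, hr, uv)
loss.backward()  # gradients flow back to the super-resolved texture
```

Because `grid_sample` is differentiable with respect to its input texture, the loss can train a texture SR network end-to-end while the geometry enters only through the precomputed `uv_grid` used at training time.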