Perceptual Quality Assessment of DIBR Synthesized Views Using Saliency-Based Deep Features
Shubham Chaudhary, Alokendu Mazumder, Vinit Jakhetiya, Deebha Mumtaz, Badri Subudhi
SPS
In recent years, Depth-Image-Based-Rendering (DIBR) synthesized views have gained popularity due to their numerous visual media applications. Consequently, research in their quality assessment (QA) has also gained momentum. In this work, we propose an efficient metric to estimate the perceptual quality of DIBR synthesized views via the extraction of deep features from a pre-trained CNN model. In DIBR synthesized views, geometric distortions generally arise near objects due to occlusion, and the human visual system is quite sensitive to these objects. On the other hand, saliency maps can efficiently highlight perceptually important objects. With this intuition, instead of extracting deep features directly from the DIBR synthesized views, we obtain the refined feature vector from their corresponding saliency maps. Moreover, most pixels with geometric distortions have a nearly similar impact on the perceptual quality of 3D synthesized views. Considering this, we propose to fuse the feature maps using the cosine similarity measure, which captures the deviation of one feature vector from another. It may also be emphasized that no training is performed in the proposed algorithm; all features are extracted from the pre-trained vanilla VGG-16 architecture. The proposed metric, when applied to the standard database, achieves a PLCC of 0.762 and an SRCC of 0.7513, outperforming existing state-of-the-art QA metrics.
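The fusion step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the per-layer feature vectors are assumed to have already been extracted (e.g., by pooling VGG-16 activations of the saliency maps), and the equal-weight averaging is an assumption motivated by the abstract's observation that distorted pixels have a near-uniform perceptual impact.

```python
import numpy as np

def cosine_similarity(a, b):
    # Deviation of one feature vector from another, measured by
    # the cosine of the angle between them (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def quality_score(ref_feats, syn_feats):
    """Fuse per-layer deep features into a single quality score.

    ref_feats / syn_feats: lists of 1-D feature vectors, one per CNN
    layer, extracted from the saliency maps of the reference view and
    the DIBR-synthesized view. (The layer choice and pooling scheme
    are assumptions for this sketch.)
    """
    sims = [cosine_similarity(r, s) for r, s in zip(ref_feats, syn_feats)]
    # Equal weighting across layers: a simple stand-in for the fusion
    # rule, reflecting the near-uniform impact of geometric distortions.
    return float(np.mean(sims))
```

For identical feature sets the score is 1.0, and it decreases as the synthesized view's saliency features deviate from the reference's.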