SPS
Length: 16:53
12 Apr 2023

Omnidirectional video, also known as 360-degree video, has become increasingly popular due to the immersive and interactive visual experiences it provides. However, its ultra-high resolution and the spherical observation space resulting from the large spherical viewing range make omnidirectional video distinctly different from traditional 2D video. To date, video quality assessment (VQA) for omnidirectional video remains an open issue.

This talk contains two parts. The first part introduces a spatio-temporal modeling approach. In this approach, we first construct a spatio-temporal quality assessment unit to evaluate the average distortion in the temporal dimension at the eye fixation level. We then give a detailed solution for integrating three existing spatial VQA metrics into this approach, and also discuss cross-format omnidirectional video distortion measurement. Based on this modeling approach, a full-reference objective quality assessment metric for omnidirectional video, namely OV-PSNR, is derived. Experimental results show that OV-PSNR greatly improves the prediction performance of existing VQA metrics for omnidirectional video.
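As a rough illustration of the idea, the following sketch averages distortion over short temporal windows (standing in for the spatio-temporal assessment units) before converting to a PSNR-style score. This is a simplified assumption for exposition only: the actual OV-PSNR definition in the talk involves eye-fixation-level modeling and spherical weighting not reproduced here, and the function name and window parameter are hypothetical.

```python
import numpy as np

def ov_psnr_sketch(ref, dist, window=8, max_val=255.0):
    """Hypothetical sketch of spatio-temporal pooling for PSNR.

    ref, dist: (T, H, W) grayscale video arrays.
    Squared error is first averaged within each temporal window
    (a stand-in for the talk's spatio-temporal assessment unit),
    then pooled across windows and converted to PSNR in dB.
    """
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    n_frames = ref.shape[0]
    window_mses = []
    for t in range(0, n_frames, window):
        # average squared error within one temporal window
        seg = (ref[t:t + window] - dist[t:t + window]) ** 2
        window_mses.append(np.mean(seg))
    mse = np.mean(window_mses)  # pool across windows
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A uniform error of 1 gray level over the whole clip, for example, yields the familiar 20·log10(255) ≈ 48.13 dB.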

The second part of this talk will introduce our attempt to use deep learning for blind omnidirectional image quality assessment. We will first discuss the challenges currently faced in this field, and then provide the details of our proposed model, BOIQA. Our model is trained in two stages: the first pre-trains the model to produce an objective error map using a reference image, and the second trains the model to predict the quality score from the inferred objective error map, where we employ a spatial weight map as a prior to model human sensitivity. Finally, we show the performance of BOIQA on the CVIQ and OIQA datasets.
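The second-stage pooling step described above can be sketched as a weighted average: the predicted objective error map is combined with a spatial weight map acting as a human-sensitivity prior to produce a scalar score. This is a minimal illustration under that assumption; the real BOIQA model learns both maps with a network, and the function name here is hypothetical.

```python
import numpy as np

def pooled_quality_score(error_map, weight_map):
    """Hypothetical stage-2 pooling sketch: weight a predicted
    objective error map by a spatial sensitivity prior and reduce
    it to a single scalar quality score (higher = more distortion).
    """
    w = weight_map / np.sum(weight_map)  # normalize the prior
    return float(np.sum(w * error_map))  # sensitivity-weighted mean error
```

With a uniform weight map this reduces to the plain average of the error map; a non-uniform prior lets perceptually important regions dominate the score.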
