A Foveated Video Quality Assessment Model Using Space-Variant Natural Scene Statistics
Yize Jin, Todd Goodall, Anjul Patney, Richard Webb, Alan Bovik
SPS
Length: 00:14:14
In Virtual Reality (VR) systems, head-mounted displays (HMDs) are widely used to present VR content. Displaying immersive (360-degree video) scenes poses greater challenges due to limits on computing power, frame rate, and transmission bandwidth. To address these problems, a variety of foveated video compression and streaming methods have been proposed. These methods exploit the nonuniform sampling density of the retinal photoreceptors and ganglion cells, which decreases rapidly with increasing eccentricity. Creating foveated immersive video content leads to the need for specialized foveated video quality predictors. Here we propose a No-Reference (NR, or blind) method, which we call "Space-Variant BRISQUE (SV-BRISQUE)," based on a new space-variant natural scene statistics model. When tested on a large database of foveated, compression-distorted videos along with human opinions of them, our new model achieves state-of-the-art (SOTA) performance, with correlations of 0.88 (PLCC) and 0.90 (SROCC) against human subjective judgments.
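As background, BRISQUE-style natural scene statistics features are built on mean-subtracted, contrast-normalized (MSCN) coefficients of an image, whose distribution changes characteristically under distortion. The sketch below shows this standard MSCN normalization only; the function name and the parameters `sigma` and `C` are illustrative defaults, not the paper's exact settings, and the space-variant (eccentricity-dependent) modeling that distinguishes SV-BRISQUE is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, C=1.0):
    """Compute mean-subtracted, contrast-normalized (MSCN) coefficients.

    This is the local divisive normalization underlying BRISQUE-type
    NSS features. sigma (Gaussian weighting scale) and C (stabilizing
    constant) are illustrative, not the paper's values.
    """
    image = image.astype(np.float64)
    # Local mean via Gaussian-weighted smoothing.
    mu = gaussian_filter(image, sigma)
    # Local standard deviation (abs guards tiny negative round-off).
    var = gaussian_filter(image * image, sigma) - mu * mu
    sigma_map = np.sqrt(np.abs(var))
    # Normalize: MSCN coefficients of natural images are roughly
    # zero-mean with a generalized-Gaussian-like distribution.
    return (image - mu) / (sigma_map + C)

# Toy usage on a synthetic 64x64 "image".
img = np.random.rand(64, 64) * 255
mscn = mscn_coefficients(img)
```

In BRISQUE-family models, parameters fitted to the MSCN distribution (and to products of neighboring coefficients) form the feature vector fed to a quality regressor; SV-BRISQUE's contribution is letting those statistics vary with retinal eccentricity.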