Neuro-inspired Deep Neural Networks With Sparse, Strong Activations
Metehan Cekic, Can Bakiskan, Upamanyu Madhow
Video Enhancement is an important computer vision task that aims to remove artifacts from lossy compressed video and to improve its visual quality through photo-realistic restoration of the video content. Decades of research have produced a multitude of efficient algorithms, enabling a reduced memory footprint for transferred video content across a continuously growing network of video streaming services. In this work, we propose VETRAN - a low-latency, real-time online Video Enhancement TRANsformer based on spatial and temporal attention mechanisms. We validate our method on recent Video Enhancement NTIRE and AIM challenge benchmarks, i.e. REDS/REDS4, LDV, and intVID. We improve over the compared state-of-the-art methods both quantitatively and qualitatively, while maintaining a low inference time.
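To illustrate the spatial and temporal attention mentioned in the abstract, below is a minimal sketch, assuming PyTorch; the module, tensor shapes, and parameter names are illustrative assumptions and not the actual VETRAN architecture.

```python
# Minimal sketch of spatio-temporal self-attention over video features.
# Assumes PyTorch; names and shapes are illustrative, not the VETRAN code.
import torch
import torch.nn as nn


class SpatioTemporalAttention(nn.Module):
    """Spatial attention within each frame, then temporal attention
    across frames at each spatial location."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens, dim), where tokens = H*W patches per frame
        b, t, n, d = x.shape

        # Spatial attention: tokens within each frame attend to each other.
        xs = x.reshape(b * t, n, d)
        xs, _ = self.spatial_attn(xs, xs, xs)
        x = x + xs.reshape(b, t, n, d)

        # Temporal attention: each spatial token attends across frames.
        xt = x.permute(0, 2, 1, 3).reshape(b * n, t, d)
        xt, _ = self.temporal_attn(xt, xt, xt)
        x = x + xt.reshape(b, n, t, d).permute(0, 2, 1, 3)
        return x


if __name__ == "__main__":
    clip = torch.randn(2, 5, 64, 32)  # 2 clips, 5 frames, 64 patches, dim 32
    out = SpatioTemporalAttention(dim=32)(clip)
    print(out.shape)  # torch.Size([2, 5, 64, 32])
```

Factoring attention into separate spatial and temporal passes, as sketched here, keeps the cost linear in the number of frames rather than quadratic in the full spatio-temporal token count, which is one common way such designs keep inference latency low.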