Basic Design Approaches to Accelerating Deep Neural Networks Video
Rangharajan Venkatesan
SSCS
IEEE Members: $25.00
Non-members: $40.00
Length: 1:43:34
Abstract: Deep neural networks are used across a wide range of applications. Custom hardware optimized for this domain offers significant performance and power advantages over general-purpose processors. However, achieving high TOPS/W and/or TOPS/mm² while also meeting requirements for scalability and programmability is challenging. This tutorial presents design approaches that strike the right balance among efficiency, scalability, and flexibility across different neural networks and emerging models. It surveys (i) circuit and architecture techniques for designing efficient compute units, memory hierarchies, and interconnect topologies, (ii) compiler approaches for effectively tiling computations, and (iii) neural network optimizations for efficient execution on the target hardware.
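As a rough illustration of the tiling idea in item (ii), the sketch below shows a simple loop-tiled matrix multiplication in Python/NumPy. It is a hypothetical example, not material from the tutorial: the function name tiled_matmul and the tile size are assumptions chosen only to show how a compiler-style schedule partitions a computation into blocks that fit a fast local buffer and improve data reuse.

import numpy as np

def tiled_matmul(a, b, tile=32):
    # Hypothetical sketch: multiply a (M x K) by b (K x N) one tile at a time.
    # Each output tile is updated using one tile of a and one tile of b,
    # mirroring how an accelerator keeps operands resident in on-chip SRAM.
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=a.dtype)
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            for k0 in range(0, k, tile):
                c[i0:i0 + tile, j0:j0 + tile] += (
                    a[i0:i0 + tile, k0:k0 + tile] @ b[k0:k0 + tile, j0:j0 + tile]
                )
    return c

# Quick check against NumPy's reference result.
x = np.random.rand(128, 96).astype(np.float32)
y = np.random.rand(96, 64).astype(np.float32)
assert np.allclose(tiled_matmul(x, y), x @ y, atol=1e-4)

In a real accelerator flow, the tile sizes would be chosen to match buffer capacities and dataflow, which is part of what the tutorial's compiler discussion addresses.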