
LightGrad: Lightweight Diffusion Probabilistic Model for Text-to-Speech

Jie Chen (Shenzhen International Graduate School, Tsinghua University); Xingchen Song (Horizon Robotics, Beijing, China); Zhendong Peng (Horizon Robotics, Beijing, China); Binbin Zhang (Horizon Robotics, Beijing, China); Fuping Pan (Horizon Robotics, Beijing, China); Zhiyong Wu (Tsinghua University)

07 Jun 2023

Recent advances in neural text-to-speech (TTS) models have brought thousands of TTS applications into daily life, where models are deployed in the cloud to provide services for customers. Among these models are diffusion probabilistic models (DPMs), which can be trained stably and are more parameter-efficient than other generative models. Because transmitting data between customers and the cloud introduces high latency and the risk of exposing private data, deploying TTS models on edge devices is preferred. Deploying DPMs on edge devices raises two practical problems. First, current DPMs are not lightweight enough for resource-constrained devices. Second, DPMs require many denoising steps at inference time, which increases latency. In this work, we present LightGrad, a lightweight DPM for TTS. LightGrad is equipped with a lightweight U-Net diffusion decoder and a training-free fast sampling technique, reducing both model parameters and inference latency. Streaming inference is also implemented in LightGrad to reduce latency further. Compared with Grad-TTS, LightGrad achieves a 62.2% reduction in parameters and a 65.7% reduction in latency, while preserving comparable speech quality on both Mandarin Chinese and English with 4 denoising steps. (Demos and code are available at: https://thuhcsi.github.io/LightGrad/)
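To make the few-step sampling idea concrete, the sketch below shows a generic first-order (Euler) sampler for the reverse probability-flow ODE of a Grad-TTS-style diffusion decoder, using NumPy only. It is an illustration under stated assumptions, not LightGrad's actual training-free fast solver: the score network is replaced by an analytic placeholder (toy_score), and the names noise_schedule and sample_mel are invented for this sketch.

    import numpy as np

    def noise_schedule(t, beta0=0.05, beta1=20.0):
        """Linear noise schedule beta_t and its integral from 0 to t
        (the Grad-TTS-style parameterisation)."""
        beta_t = beta0 + (beta1 - beta0) * t
        int_beta = beta0 * t + 0.5 * (beta1 - beta0) * t ** 2
        return beta_t, int_beta

    def toy_score(x_t, mu, t):
        """Placeholder for the diffusion decoder: the analytic score of
        N(mu, var_t * I), so the sampler runs without trained weights."""
        _, int_beta = noise_schedule(t)
        var_t = 1.0 - np.exp(-int_beta)  # marginal variance at time t
        return (mu - x_t) / var_t        # gradient of the log-density

    def sample_mel(mu, n_steps=4, seed=0):
        """Few-step Euler integration of the reverse probability-flow ODE
        dX = 0.5 * beta_t * ((mu - X) - score) dt, from t=1 down to t=0."""
        rng = np.random.default_rng(seed)
        x = mu + rng.standard_normal(mu.shape)  # terminal prior N(mu, I)
        h = 1.0 / n_steps
        for n in range(n_steps):
            t = 1.0 - n * h                     # current diffusion time
            beta_t, _ = noise_schedule(t)
            drift = 0.5 * beta_t * ((mu - x) - toy_score(x, mu, t))
            x = x - h * drift                   # one backward Euler step
        return x

    # mu: encoder-predicted "average" mel-spectrogram (80 bins x 100 frames)
    mu = np.zeros((80, 100))
    mel = sample_mel(mu, n_steps=4)
    print(mel.shape, float(np.abs(mel - mu).mean()))

With n_steps=4 the loop mirrors the 4 denoising steps quoted above; in a real model every score evaluation is one forward pass through the diffusion decoder, so cutting the step count (and shrinking the U-Net itself) translates directly into lower inference latency.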
