Dian: Duration Informed Auto-Regressive Network For Voice Cloning

Wei Song, Xin Yuan, Zhengchen Zhang, Chao Zhang, Youzheng Wu, Xiaodong He, Bowen Zhou

07 Jun 2021

In this paper, we propose a novel end-to-end speech synthesis approach, the Duration Informed Auto-regressive Network (DIAN), which consists of an acoustic model and a separate duration model. Unlike other auto-regressive TTS methods, the phoneme duration information is provided as part of the input to the acoustic model, which makes it possible to remove the attention mechanism between its encoder and decoder. This eliminates the commonly seen skipping and repeating issues and improves speech intelligibility while maintaining high speech quality. A Transformer-based duration model is used to predict phoneme durations for the attention-free acoustic model. We developed our TTS systems for the M2VoC challenge using the proposed DIAN approach. In our procedure, a multi-speaker attention-free acoustic model and its Transformer-based duration model are first trained separately on the training data released by M2VoC. Next, the multi-speaker models are adapted into speaker-specific models using the speaker-dependent data and transfer learning. Finally, a speaker-specific LPCNet is estimated and used to synthesize the speech of the corresponding speaker. The M2VoC results show that our approach achieved 3rd place in the speech quality ranking and 4th place in the speaker similarity and style similarity ranking of the Track 1-a task.
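To make the duration-informed idea concrete, the sketch below shows how duration information can replace encoder-decoder attention: each phoneme encoding is repeated for its duration (in frames) so the decoder consumes a frame-aligned conditioning sequence. This is a minimal PyTorch illustration under our own assumptions; the module names, layer choices, sizes, and the `length_regulate` helper are hypothetical and do not reflect the authors' released code.

```python
import torch
import torch.nn as nn

def length_regulate(enc, durations):
    """Repeat each phoneme encoding durations[i] times so the encoder
    output is aligned frame-by-frame with the target mel spectrogram.
    enc: (num_phonemes, hidden); durations: (num_phonemes,) in frames."""
    return torch.repeat_interleave(enc, durations, dim=0)

class AttentionFreeAcousticModel(nn.Module):
    """Hypothetical DIAN-style acoustic model: a phoneme encoder whose
    outputs are expanded by known or predicted durations, then decoded
    autoregressively into mel frames, with no attention in between."""
    def __init__(self, n_phonemes=80, hidden=256, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, hidden)
        self.encoder = nn.GRU(hidden, hidden)
        # The decoder conditions on the duration-expanded encoding plus
        # the previous mel frame (the autoregressive input).
        self.decoder = nn.GRUCell(hidden + n_mels, hidden)
        self.proj = nn.Linear(hidden, n_mels)

    def forward(self, phonemes, durations):
        enc, _ = self.encoder(self.embed(phonemes))   # (T_ph, hidden)
        aligned = length_regulate(enc, durations)     # (T_frames, hidden)
        h = aligned.new_zeros(self.decoder.hidden_size)
        prev = aligned.new_zeros(self.proj.out_features)
        frames = []
        for cond in aligned:                          # one step per frame
            h = self.decoder(torch.cat([cond, prev]), h)
            prev = self.proj(h)
            frames.append(prev)
        return torch.stack(frames)                    # (T_frames, n_mels)
```

Because the conditioning sequence is already frame-aligned before decoding begins, the decoder has no alignment to learn and therefore cannot skip or repeat phonemes, which is the intelligibility benefit the abstract attributes to removing attention. At synthesis time the `durations` input would come from the Transformer-based duration model rather than forced alignments.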
