11 May 2022

Non-autoregressive text-to-speech (TTS) models typically learn the one-to-many relationship between text and speech in one of two ways. The first is to use an advanced generative framework such as normalizing flow (NF). The second is to condition speech generation on variance information such as pitch or energy; this approach also allows the variance factors to be controlled by adjusting the variance values provided to the model. In this paper, we propose a novel model called VarianceFlow that combines the advantages of both approaches. By modeling the variance with NF, VarianceFlow predicts the variance information more precisely, which improves speech quality. In addition, the NF objective encourages the model to use the variance information and the text in a disentangled manner, which yields more precise variance control. In experiments, VarianceFlow outperforms other state-of-the-art TTS models in both speech quality and controllability.
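To make the NF idea concrete, the following is a minimal sketch (not the authors' code) of how a single affine coupling step of a normalizing flow could model a variance factor such as pitch conditioned on text hidden states. All names and dimensions (`text` condition size, the one-layer coupling network, the toy pitch features) are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_coupling_forward(x, cond, w, b):
    """One affine coupling step: scale and shift the second half of x
    using parameters predicted from the first half and the text condition."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    # Tiny one-layer coupling network (an illustrative stand-in for a real one).
    h = np.tanh(np.concatenate([x1, cond], axis=-1) @ w + b)
    log_s, t = h[..., :d], h[..., d:]
    z2 = x2 * np.exp(log_s) + t
    log_det = log_s.sum(axis=-1)  # contribution to log |det dz/dx|
    return np.concatenate([x1, z2], axis=-1), log_det

# Toy dimensions: 4-dim "pitch feature" per frame, 8-dim text condition.
x = rng.normal(size=(5, 4))            # variance factor (e.g. framewise pitch features)
cond = rng.normal(size=(5, 8))         # text encoder hidden states
w = rng.normal(size=(2 + 8, 4)) * 0.1  # coupling-net weights
b = np.zeros(4)

z, log_det = affine_coupling_forward(x, cond, w, b)

# NF training objective: maximize log p(x | text)
#   = log N(z; 0, I) + log |det dz/dx|
log_prior = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=-1)
log_likelihood = log_prior + log_det
print(log_likelihood.shape)  # one log-likelihood per frame
```

Because the coupling transform is invertible, the same flow can be run in reverse at inference time to sample or deterministically predict the variance values from the text condition, which is what enables both precise prediction and explicit control.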
