How Far Are We From Robust Voice Conversion: A Survey
Tzu-hsien Huang, Jheng-hao Lin, Hung-yi Lee
Voice conversion technologies have been greatly improved in recent years with the help of deep learning, but their ability to produce natural-sounding utterances in different conditions remains unclear. In this paper, we conducted a thorough study of the robustness of known VC models. We also modified these models, for example by replacing the speaker embeddings, to further improve their performance. We found that the sampling rate and audio duration greatly influence voice conversion. All the VC models suffer from unseen data, but AdaIN-VC is relatively more robust. Also, speaker embeddings jointly trained with the VC model are more suitable for voice conversion than those trained on speaker identification.
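As a minimal sketch (not the paper's code), the robustness probes over sampling rate and audio duration could be set up as below; the `convert(source_wav, target_wav, sample_rate)` callable, file paths, and the probe values are all assumptions for illustration.

```python
# Hypothetical sketch: probe a VC model's robustness to sampling rate and
# utterance duration, assuming a generic `convert` function is provided.
import torchaudio

def probe_robustness(convert, source_path, target_path,
                     sample_rates=(8_000, 16_000, 22_050),
                     durations_sec=(1.0, 3.0, 5.0)):
    """Run conversion under varied sampling rates and audio durations."""
    src, src_sr = torchaudio.load(source_path)
    tgt, tgt_sr = torchaudio.load(target_path)
    outputs = {}
    for sr in sample_rates:
        # Resample both utterances to the probe sampling rate.
        src_rs = torchaudio.functional.resample(src, src_sr, sr)
        tgt_rs = torchaudio.functional.resample(tgt, tgt_sr, sr)
        for dur in durations_sec:
            n = int(sr * dur)
            # Truncate to the probe duration (shorter clips pass through whole).
            outputs[(sr, dur)] = convert(src_rs[:, :n], tgt_rs[:, :n], sr)
    return outputs
```

The converted outputs collected per condition could then be scored with the same naturalness and speaker-similarity metrics used for the baseline setting, making the effect of each factor directly comparable.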