One-Shot Voice Conversion Using StarGAN
Ruobai Wang, Yu Ding, Lincheng Li, Changjie Fan
SPS
This work addresses one-shot voice conversion, where the target speaker, or both the source and target speakers, are unseen in the training dataset. StarGAN is employed to perform voice conversion between speakers, with an embedding vector representing the speaker identity. The framework is trained and evaluated on two English datasets and one Chinese dataset, involving 38 speakers in total. A user study validates the framework in terms of reconstruction quality and conversion quality. The results show that the framework performs one-shot voice conversion and also outperforms state-of-the-art methods when the test speaker is seen in the training dataset. An exploratory experiment demonstrates that the framework can be updated with incremental training when data from new speakers becomes available.
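The conditioning scheme mentioned above, representing the target speaker as an embedding vector fed to the StarGAN generator, can be sketched as follows. This is a minimal illustration only: the embedding size, feature size, and the simple tile-and-concatenate strategy are assumptions for exposition, not the paper's exact architecture.

```python
import numpy as np

# Hypothetical sketch: condition a generator on a target-speaker embedding
# by broadcasting the embedding across time and concatenating it with the
# source utterance's per-frame acoustic features (dimensions are illustrative).

EMB_DIM = 16    # assumed speaker-embedding size
FEAT_DIM = 36   # assumed acoustic-feature size per frame

rng = np.random.default_rng(0)

# Embedding table for speakers seen in training (38 speakers, as in the paper);
# an unseen one-shot speaker would instead receive an embedding inferred from
# a single reference utterance.
speaker_table = rng.normal(size=(38, EMB_DIM))

def condition_on_speaker(src_feats: np.ndarray, spk_emb: np.ndarray) -> np.ndarray:
    """Tile the speaker embedding over time and concatenate it to each frame."""
    t = src_feats.shape[0]
    tiled = np.tile(spk_emb, (t, 1))                    # (T, EMB_DIM)
    return np.concatenate([src_feats, tiled], axis=1)   # (T, FEAT_DIM + EMB_DIM)

src = rng.normal(size=(128, FEAT_DIM))  # 128 frames of source features
conditioned = condition_on_speaker(src, speaker_table[5])
print(conditioned.shape)  # (128, 52)
```

The conditioned tensor would then be passed through the generator; at test time, swapping in a different speaker's embedding changes the conversion target without retraining.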