REPEAT AFTER ME: SELF-SUPERVISED LEARNING OF ACOUSTIC-TO-ARTICULATORY MAPPING BY VOCAL IMITATION
Marc-Antoine Georges, Laurent Girin, Jean-Luc Schwartz, Thomas Hueber, Julien Diard
We propose a computational model of speech production combining three components: a pre-trained neural articulatory synthesizer able to reproduce complex speech stimuli from a limited set of interpretable articulatory parameters, a DNN-based internal forward model predicting the sensory consequences of articulatory commands, and an internal inverse model, based on a recurrent neural network, that recovers articulatory commands from the acoustic speech input. The forward and inverse models are jointly trained in a self-supervised way from raw acoustic-only speech data from different speakers. The resulting imitation simulations are evaluated objectively and subjectively and show encouraging performance.
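To make the forward/inverse pairing concrete, here is a minimal PyTorch sketch of a self-supervised training loop in the spirit of the abstract: the inverse RNN maps heard acoustics to articulatory commands, the forward DNN predicts the acoustic consequences of those commands, and both are trained jointly by minimizing the acoustic reconstruction error, with no articulatory ground truth. All dimensions, layer sizes, and loss details are hypothetical assumptions, not the paper's specification, and the pre-trained articulatory synthesizer (which renders audio from the articulatory parameters) is omitted.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions (assumptions, not taken from the paper).
N_ACOUSTIC = 40   # e.g. size of an acoustic feature frame
N_ARTIC = 10      # small set of interpretable articulatory parameters

class ForwardModel(nn.Module):
    """DNN predicting the sensory (acoustic) consequences of articulatory commands."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ARTIC, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_ACOUSTIC),
        )

    def forward(self, artic):           # (batch, time, N_ARTIC)
        return self.net(artic)          # (batch, time, N_ACOUSTIC)

class InverseModel(nn.Module):
    """Recurrent network recovering articulatory commands from acoustic input."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_ACOUSTIC, 256, batch_first=True)
        self.out = nn.Linear(256, N_ARTIC)

    def forward(self, acoustic):        # (batch, time, N_ACOUSTIC)
        h, _ = self.rnn(acoustic)
        return self.out(h)              # (batch, time, N_ARTIC)

inverse, fwd = InverseModel(), ForwardModel()
opt = torch.optim.Adam(
    list(inverse.parameters()) + list(fwd.parameters()), lr=1e-4
)
loss_fn = nn.MSELoss()

def train_step(acoustic_batch):
    """One self-supervised step on raw acoustic-only data."""
    opt.zero_grad()
    artic = inverse(acoustic_batch)       # inferred articulatory commands
    acoustic_hat = fwd(artic)             # predicted sensory consequences
    loss = loss_fn(acoustic_hat, acoustic_batch)
    loss.backward()
    opt.step()
    return loss.item()

# Usage on a dummy batch of 8 utterances, 100 frames each:
loss = train_step(torch.randn(8, 100, N_ACOUSTIC))
```

Since only the acoustic signal is observed, the reconstruction loss through the forward model is what supervises the inverse model: the articulatory trajectory is a latent code constrained to be renderable back into the heard speech.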