Word Similarity Based Label Smoothing In Rnnlm Training For Asr
Minguang Song, Yunxin Zhao, Shaojun Wang, Mei Han
SPS
Label smoothing has been shown to be an effective regularization approach for deep neural networks. Recently, a context-sensitive label smoothing approach was proposed for training RNNLMs that improved word error rates on speech recognition tasks. Despite the performance gains, its plausible candidate words for label smoothing were confined to $n$-grams observed in the training data. To investigate the potential of label smoothing in model training with insufficient data, in this work we propose to utilize the similarity between word embeddings to build a candidate word set for each target word, so that plausible words outside the $n$-grams in the training data may be found and introduced into candidate word sets for label smoothing. Moreover, we propose to combine the smoothed labels from the $n$-gram based and the word-similarity based methods to improve the generalization capability of RNNLMs. Our proposed approach to RNNLM training has been evaluated on $n$-best list rescoring for the WSJ and AMI speech recognition tasks, with improved word error rates confirming its effectiveness.
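The core idea can be illustrated with a minimal sketch (not the authors' implementation): pick the top-k most cosine-similar words to each target word from an embedding table, spread a small amount of probability mass over them, and optionally interpolate with an n-gram based smoothing distribution. The names and values here (`top_k`, `eps`, `alpha`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def similarity_candidates(embeddings, target_idx, top_k=5):
    """Indices of the top_k words most cosine-similar to the target word."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = emb @ emb[target_idx]
    sims[target_idx] = -np.inf          # exclude the target itself
    return np.argsort(sims)[-top_k:]

def smoothed_label(vocab_size, target_idx, candidates, eps=0.1):
    """One-hot target with eps of probability mass spread over candidates."""
    label = np.zeros(vocab_size)
    label[target_idx] = 1.0 - eps
    label[candidates] = eps / len(candidates)
    return label

def combined_label(sim_label, ngram_label, alpha=0.5):
    """Interpolate similarity-based and n-gram based smoothing distributions."""
    return alpha * sim_label + (1.0 - alpha) * ngram_label

# Toy usage with random embeddings (vocabulary of 1000 words, dim 32).
rng = np.random.default_rng(0)
E = rng.standard_normal((1000, 32))
cands = similarity_candidates(E, target_idx=7, top_k=5)
y = smoothed_label(1000, target_idx=7, candidates=cands, eps=0.1)
```

The resulting `y` is a valid target distribution (sums to one) that a cross-entropy loss can consume in place of the usual one-hot label.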