
Multi-Modal Pre-Training for Automated Speech Recognition

David Chan, Shalini Ghosh, Debmalya Chakrabarty, Björn Hoffmeister

Length: 00:07:57
08 May 2022

Traditionally, research in automated speech recognition has focused on local-first encoding of audio representations to predict the spoken phonemes in an utterance. Unfortunately, approaches relying on such hyper-local information tend to be vulnerable both to local-level corruption (such as audio-frame drops or loud noises) and to global-level noise (such as environmental or background noise) that has not been seen during training. In this work, we introduce a novel approach that leverages a self-supervised learning technique based on masked language modeling to compute a global, multi-modal encoding of the environment in which the utterance occurs. We then use a new deep-fusion framework to integrate this global context into a traditional ASR method, and demonstrate that the resulting method can outperform baseline methods by up to 7% on Librispeech; gains on internal datasets range from 6% (on larger models) to 45% (on smaller models).
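To make the deep-fusion idea concrete, the sketch below shows one plausible way to inject an utterance-level multi-modal embedding into per-frame acoustic features before a conventional ASR encoder. This is an illustrative reconstruction, not the authors' implementation: the module and parameter names (GlobalContextFusion, context_proj, the gated additive mixing) are assumptions made for the example.

```python
import torch
import torch.nn as nn


class GlobalContextFusion(nn.Module):
    """Illustrative sketch: fuse a global (utterance-level) context embedding
    into per-frame acoustic features ahead of a standard ASR encoder.
    Names and the gated additive design are assumptions, not the paper's code."""

    def __init__(self, frame_dim: int = 80, context_dim: int = 512):
        super().__init__()
        # Project the global multi-modal embedding into the acoustic feature space.
        self.context_proj = nn.Linear(context_dim, frame_dim)
        # Gate deciding, per feature dimension and frame, how much context to mix in.
        self.gate = nn.Linear(frame_dim * 2, frame_dim)

    def forward(self, frames: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # frames:  (batch, time, frame_dim)  local acoustic features
        # context: (batch, context_dim)      global multi-modal embedding
        ctx = self.context_proj(context).unsqueeze(1)      # (batch, 1, frame_dim)
        ctx = ctx.expand(-1, frames.size(1), -1)           # broadcast over time
        gate = torch.sigmoid(self.gate(torch.cat([frames, ctx], dim=-1)))
        return frames + gate * ctx                         # gated additive fusion


if __name__ == "__main__":
    fusion = GlobalContextFusion()
    frames = torch.randn(2, 100, 80)   # e.g. log-mel filterbank features
    context = torch.randn(2, 512)      # e.g. output of a pre-trained multi-modal encoder
    print(fusion(frames, context).shape)  # torch.Size([2, 100, 80])
```

The gating keeps the local features intact when the global context is uninformative, which matches the motivation of using environment-level context only to compensate for local corruption or unseen noise.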
