04 May 2020

Neural network language models have gained considerable popularity due to their promising performance. Distributed word embeddings are used to represent semantic information, but because each word is associated with a single vector in the embedding layer, the model cannot capture the multiple meanings of polysemous words. In this work, we address this problem by assigning multiple fine-grained sense embeddings to each word in the embedding layer. The proposed model discriminates among the different senses of a word with an attention mechanism, trained in an unsupervised manner. Experiments demonstrate the benefits of our approach in language modeling and ASR rescoring, and further evaluations on standard word similarity tasks indicate that the proposed method models polysemy effectively and therefore yields better word representations.
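To make the idea concrete, below is a minimal sketch of a multi-sense embedding layer in PyTorch. It is an illustration under stated assumptions, not the paper's exact model: the class name `MultiSenseEmbedding`, the choice of three senses per word, and the scaled dot-product form of the attention are all hypothetical; the abstract only specifies multiple sense embeddings per word combined by an attention mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSenseEmbedding(nn.Module):
    """Embedding layer with K sense vectors per word; a context vector
    attends over the senses to produce one disambiguated embedding.
    Hyperparameters and the attention form are illustrative assumptions."""

    def __init__(self, vocab_size, embed_dim, num_senses=3):
        super().__init__()
        self.num_senses = num_senses
        self.embed_dim = embed_dim
        # One embedding row per (word, sense) pair.
        self.sense_embeddings = nn.Embedding(vocab_size * num_senses, embed_dim)

    def forward(self, word_ids, context):
        # word_ids: (batch,) token indices
        # context:  (batch, embed_dim), e.g. the LM's previous hidden state
        # Indices of the K sense vectors of each word: (batch, K)
        sense_ids = word_ids.unsqueeze(1) * self.num_senses + torch.arange(
            self.num_senses, device=word_ids.device).unsqueeze(0)
        senses = self.sense_embeddings(sense_ids)          # (batch, K, dim)
        # Scaled dot-product attention of the context over the senses.
        scores = torch.bmm(senses, context.unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores / self.embed_dim ** 0.5, dim=1)  # (batch, K)
        # Weighted sum of sense vectors -> disambiguated word embedding.
        return torch.bmm(weights.unsqueeze(1), senses).squeeze(1)   # (batch, dim)
```

Because the attention weights are produced from the running context and trained end-to-end with the ordinary language-model loss, no sense-annotated data is needed, which matches the unsupervised sense discrimination described in the abstract.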
