FEDERATED SELF-TRAINING FOR DATA-EFFICIENT AUDIO RECOGNITION

Vasileios Tsouvalas, Tanir Ozcelebi, Aaqib Saeed

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:16:05
10 May 2022

Federated Learning is a distributed machine learning paradigm dealing with decentralized and personal datasets. Since data reside on devices like smartphones, labeling is entrusted to the clients, or labels are extracted in an automated way. In the case of audio data specifically, acquiring semantic annotations can be prohibitively expensive and time-consuming. As a result, an abundance of audio data remains unlabeled and unexploited on users' devices. Existing federated learning approaches largely focus on supervised learning without harnessing the unlabeled data. Here, we study the problem of semi-supervised learning of audio models in conjunction with federated learning. We propose FedSTAR, a self-training approach that exploits large-scale on-device unlabeled data to improve the generalization of audio recognition models. We conduct experiments on diverse public audio classification datasets and investigate the performance of our models under varying percentages of labeled data. We show that with as little as 3% labeled data, FedSTAR can, on average, improve the recognition rate by 13.28% compared to the fully supervised federated model.
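The core ingredients of such a self-training pipeline are (i) pseudo-labeling confident predictions on each client's unlabeled data and (ii) aggregating client models on the server via weighted averaging (FedAvg). The sketch below illustrates these two steps with a toy linear model in NumPy; the function names, the confidence threshold, and the linear classifier are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label(weights, x_unlabeled, threshold=0.9):
    """On a client: predict on unlabeled samples and keep only those
    whose top-class confidence exceeds the threshold, using the
    predicted class as a pseudo-label. (Toy linear model, assumed.)"""
    probs = softmax(x_unlabeled @ weights)
    confidence = probs.max(axis=1)
    keep = confidence >= threshold
    return x_unlabeled[keep], probs[keep].argmax(axis=1)

def fed_avg(client_weights, client_sizes):
    """On the server: standard FedAvg, a weighted average of client
    model parameters proportional to client dataset sizes."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

In a full training loop, each client would fine-tune on its labeled data plus the retained pseudo-labeled samples before the server averages the updated models; low-confidence samples are simply skipped until the model becomes more certain about them in a later round.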
