  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:15:02
09 Jun 2021

In this paper, we consider the problem of Federated Learning (FL) in a non-i.i.d. data setting. We provide an improved estimate of the empirical loss at each node by using a weighted average of losses across nodes together with a penalty term. The uneven weights assigned to the nodes follow from a novel Bayesian approach in which learning at each device/node is cast as maximizing the likelihood of a joint distribution over node losses, obtained by evaluating a given node's neural network on data across devices. We then provide a PAC learning guarantee on the objective function, which shows that the true average risk is bounded by the proposed objective plus an error term. We leverage this guarantee to propose an algorithm called Omni-Fedge. Using the MNIST and Fashion-MNIST datasets, we show that the proposed algorithm performs significantly better than existing algorithms.
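The core objective described above — a weighted average of per-node empirical losses plus a penalty term — can be sketched as follows. This is a minimal illustration, not the paper's method: the weight vector `alpha` stands in for the Bayesian-derived weights, and the L2-style penalty on the model parameters is an assumed form for the penalty term.

```python
import numpy as np

def weighted_node_objective(losses, alpha, penalty_coeff, model_norm_sq):
    """Illustrative per-node objective: weighted average of empirical
    losses across nodes plus a penalty term.

    losses        : empirical loss of the node's model on each node's data
    alpha         : non-negative weights over nodes, summing to 1
                    (hypothetical stand-in for the paper's Bayesian weights)
    penalty_coeff : penalty strength (assumed form of the penalty term)
    model_norm_sq : squared norm of the node's model parameters
    """
    losses = np.asarray(losses, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    # Weights must form a valid convex combination over nodes.
    assert np.all(alpha >= 0) and np.isclose(alpha.sum(), 1.0)
    return float(alpha @ losses + penalty_coeff * model_norm_sq)

# Example: three nodes; this node weights its own loss most heavily
# but still borrows signal from the other nodes' data.
obj = weighted_node_objective(
    losses=[0.9, 0.4, 0.6],
    alpha=[0.6, 0.3, 0.1],
    penalty_coeff=0.01,
    model_norm_sq=2.0,
)
```

In the PAC-style guarantee quoted in the abstract, an objective of this shape upper-bounds the true average risk up to an error term, which is what motivates minimizing it instead of each node's local loss alone.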

Chairs:
Tao Zhang
