From Gradient Leakage To Adversarial Attacks In Federated Learning
Jia Qi Lim, Chee Seng Chan
SPS
Deep neural networks (DNNs) are widely used in real-life applications despite a limited understanding of this technology and its challenges. Data privacy is one of the bottlenecks yet to be overcome, and further challenges arise as researchers pay more attention to DNN vulnerabilities. In this work, we aim to cast doubt on the reliability of DNNs with solid evidence, particularly in the federated learning environment, by utilizing an existing privacy-breaking algorithm that inverts the gradients of a model to reconstruct its input data. By performing this attack, we demonstrate that the data reconstructed by the gradient-inversion algorithm poses a potential threat, and we further reveal vulnerabilities of models in representation learning. A PyTorch implementation is provided at https://github.com/Jiaqi0602/adversarial-attack-from-leakage/
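The core idea behind such gradient-inversion attacks (e.g. DLG and the "inverting gradients" method) can be illustrated on a toy model: a federated client shares the gradient of its loss, and the attacker optimizes a dummy input until that dummy input's gradient matches the leaked one. The sketch below uses a one-parameter-layer linear model, a known label, and plain finite-difference descent; all names and constants are illustrative assumptions, not the paper's actual implementation, which operates on deep networks with cosine-similarity matching.

```python
# Toy sketch of gradient inversion: recover a client's private input from the
# gradient it shares in federated learning. Linear model and hyperparameters
# are illustrative assumptions, not the paper's implementation.

def shared_gradient(w, x, t):
    """Gradient of the squared error (w.x - t)^2 w.r.t. the weights w.
    In federated learning, this is what a client uploads to the server."""
    p = sum(wi * xi for wi, xi in zip(w, x))
    return [2.0 * (p - t) * xi for xi in x]

def match_loss(w, x_dummy, t, g_leaked):
    """Distance between the dummy input's gradient and the leaked gradient."""
    g = shared_gradient(w, x_dummy, t)
    return sum((a - b) ** 2 for a, b in zip(g, g_leaked))

def invert_gradient(w, t, g_leaked, dim, steps=4000, lr=3e-3, eps=1e-6):
    """Optimize a dummy input so its gradient reproduces g_leaked.
    (Assumes the label t is known -- a simplification; DLG jointly
    optimizes a dummy label as well.)"""
    x = [0.0] * dim
    loss = match_loss(w, x, t, g_leaked)
    for _ in range(steps):
        # Finite-difference gradient of the matching loss w.r.t. x.
        grad = []
        for i in range(dim):
            x[i] += eps
            grad.append((match_loss(w, x, t, g_leaked) - loss) / eps)
            x[i] -= eps
        trial = [xi - lr * gi for xi, gi in zip(x, grad)]
        trial_loss = match_loss(w, trial, t, g_leaked)
        if trial_loss < loss:   # keep steps that reduce the mismatch
            x, loss = trial, trial_loss
        else:                   # otherwise shrink the step size
            lr *= 0.5
    return x, loss

w = [0.5, -1.2, 0.8]           # model weights, known to the attacker
x_secret = [1.0, 2.0, -0.5]    # the client's private input
t = 0.3                        # the client's label
g = shared_gradient(w, x_secret, t)     # observed in a federated update
x_rec, final = invert_gradient(w, t, g, dim=3)
```

On realistic image models the same optimization needs stronger machinery (automatic differentiation, cosine-similarity matching, image priors such as total variation), but the sketch shows why a shared gradient is not privacy-neutral: it pins down the input up to the minima of the matching loss.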