Social Learning Under Inferential Attacks

Konstantinos Ntemos, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:14:16
11 Jun 2021

A common assumption in the social learning literature is that agents exchange information in an unselfish manner. In this work, we consider the scenario where a subset of agents aims at driving the network beliefs toward the wrong hypothesis. The adversaries are unaware of the true hypothesis; however, they "blend in" by behaving similarly to the other agents while manipulating the likelihood functions used in the belief update process to launch inferential attacks. We characterize the conditions under which the network is misled. We then show that such attacks can succeed by constructing strategies that the malicious agents can adopt for this purpose. We examine situations in which the malicious agents have either minimal or no information about the network model.
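To give a concrete feel for the mechanism the abstract describes, the sketch below simulates a small network of social learners performing Bayesian belief updates followed by log-linear (geometric-average) combination, with a subset of malicious agents misreporting their likelihoods. The Gaussian signal model, the uniform combination matrix, and the specific attack (swapping the two likelihoods) are illustrative assumptions for this sketch, not the paper's exact model or attack strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypotheses; hypothesis 0 is true. Agents observe noisy signals whose
# distribution depends on the true hypothesis (illustrative Gaussian model).
means = {0: 0.0, 1: 1.0}  # signal mean under each hypothesis
sigma = 1.0

def likelihood(x, hyp):
    """Unnormalized Gaussian likelihood of signal x under a hypothesis."""
    return np.exp(-0.5 * ((x - means[hyp]) / sigma) ** 2)

n_agents = 5
malicious = {2, 3, 4}  # hypothetical adversarial subset (a majority here)
A = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic combination matrix
beliefs = np.full((n_agents, 2), 0.5)              # uniform priors

for _ in range(200):
    x = rng.normal(means[0], sigma, size=n_agents)  # signals generated under hypothesis 0
    psi = np.empty_like(beliefs)
    for k in range(n_agents):
        L = np.array([likelihood(x[k], 0), likelihood(x[k], 1)])
        if k in malicious:
            L = L[::-1]  # inferential attack: swap the reported likelihoods
        psi[k] = beliefs[k] * L          # local Bayesian update
        psi[k] /= psi[k].sum()
    # Geometric-average combination of neighbors' intermediate beliefs
    beliefs = np.exp(A @ np.log(psi))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs)  # with a malicious majority, beliefs concentrate on hypothesis 1
```

With honest agents only, the average log-likelihood-ratio drift is positive (a KL divergence) and the network learns the truth; when the misreporting agents dominate the combination weights, the drift flips sign and the whole network's beliefs concentrate on the wrong hypothesis, which is the failure mode the talk analyzes.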

Chairs:
Vikram Krishnamurthy
