Adversarial Training in Federated Learning using Constrained Optimization Methods

University of Würzburg, Master Thesis, May 2023

Abstract

Federated Learning (FL) is an approach to machine learning that facilitates the collaborative training of models while allowing participants to keep their local and potentially sensitive data private. Due to its decentralized nature, FL is especially vulnerable to certain types of attacks. In a so-called backdoor attack, adversaries submit manipulated updates to the model aggregation process; for adversary-determined inputs, the newly aggregated model then generates targeted false predictions. A common defense so far has been to measure the deviation between the global model and the model trained by a participant and to reject models whose deviation exceeds a certain threshold. Adversaries can counteract this by adjusting their training objective in a way that penalizes large deviations. However, this merely encourages, but does not guarantee, that the deviation threshold is not exceeded. As its main contribution, this thesis reformulates the adversarial training objective as a constrained optimization problem, a class of problems well researched in mathematics that can often be solved by the Augmented Lagrangian Method. This eliminates the dilemma that adversaries currently face when having to decide between the effectiveness of their backdoor and remaining undetected. A secondary, independent contribution is a scheme to detect FL participants training a hidden objective, such as a backdoor, for linear approximations of non-linear models (Neural Tangent Kernels). Finally, the proposed attack and defense mechanisms are evaluated against several existing state-of-the-art backdoor types (e.g., the semantic backdoor).
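To illustrate the reformulation described above, the following is a minimal sketch (not the thesis's actual implementation) of the Augmented Lagrangian Method applied to a toy quadratic stand-in for the backdoor objective, with the deviation from the global model bounded by a hard constraint; the function name, the quadratic loss, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def augmented_lagrangian_attack(theta_global, theta_backdoor, eps,
                                mu=10.0, outer_steps=20, inner_steps=200, lr=0.01):
    """Toy sketch of the constrained adversarial objective:

        minimize   f(theta) = ||theta - theta_backdoor||^2   (stand-in backdoor loss)
        subject to g(theta) = ||theta - theta_global||^2 - eps^2 <= 0  (deviation bound)

    solved with the Augmented Lagrangian Method for a single
    inequality constraint. All names and values are illustrative.
    """
    theta = theta_global.copy()
    lam = 0.0  # Lagrange multiplier for the inequality constraint
    for _ in range(outer_steps):
        # Inner loop: gradient descent on the augmented Lagrangian
        # L(theta) = f(theta) + (1 / (2*mu)) * (max(0, lam + mu*g(theta))^2 - lam^2)
        for _ in range(inner_steps):
            g = np.sum((theta - theta_global) ** 2) - eps ** 2
            penalty = max(0.0, lam + mu * g)  # active only when constraint binds
            grad = 2 * (theta - theta_backdoor) + penalty * 2 * (theta - theta_global)
            theta -= lr * grad
        # Outer loop: standard multiplier update for inequality constraints
        g = np.sum((theta - theta_global) ** 2) - eps ** 2
        lam = max(0.0, lam + mu * g)
    return theta
```

On this toy problem, the returned parameters approach the backdoor target only as far as the deviation budget `eps` allows, which is the behavior the constrained formulation guarantees in place of a soft deviation penalty.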
