Adversarial Training in Federated Learning using Constrained Optimization Methods
J. König. University of Würzburg, Master Thesis, May 2023
Abstract
Federated Learning (FL) is an approach to machine learning that facilitates the collaborative training of models and allows participants to keep their local and potentially sensitive data private. Due to its decentralized nature, FL is especially vulnerable to certain types of attacks. In a so-called backdoor attack, adversaries submit manipulated updates to the model aggregation process. For adversary-determined inputs, the newly aggregated model will then generate targeted false predictions. A common defense so far has been to measure the deviation between the global model and the model trained by a participant and to reject models whose deviation exceeds a certain threshold. Adversaries can counteract this by adjusting their training objective so that large deviations are penalized. However, this merely encourages, but does not guarantee, that the deviation threshold is not exceeded. As the main contribution, this thesis reformulates the adversarial training objective as a constrained optimization problem, a class of problems well researched in mathematics that can often be solved by the Augmented Lagrangian Method. This eliminates the dilemma adversaries face with current methods, having to trade off the effectiveness of their backdoor against remaining undetected. A secondary, independent contribution is a scheme to detect FL participants training a hidden objective, such as a backdoor, for linear approximations of non-linear models (Neural Tangent Kernels). Finally, the proposed attack and defense mechanisms are evaluated against several existing state-of-the-art backdoor types (e.g., the semantic backdoor).
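
For context, the following is a minimal, hypothetical sketch of the defense setting the abstract describes: a norm-threshold check at the aggregator and the penalty term with which current attacks try to evade it. It is not taken from the thesis; names such as accept_update, evasion_penalty, and alpha are illustrative assumptions.

import numpy as np

# Hypothetical norm-threshold defense at the aggregator: a participant's model
# is rejected if it deviates too far from the current global model.
def accept_update(w_update, w_global, threshold):
    return float(np.linalg.norm(w_update - w_global)) <= threshold

# Penalty-based evasion as used by current attacks: the adversary adds
# alpha * ||w - w_global||^2 to its training loss. Large deviations are
# discouraged, but nothing bounds them, so the threshold can still be exceeded.
def evasion_penalty(w, w_global, alpha):
    return alpha * np.linalg.norm(w - w_global) ** 2

Whether the submitted model passes accept_update then depends on how alpha trades off against the backdoor objective, which is exactly the dilemma the abstract refers to.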
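
The main contribution replaces that soft penalty with a hard constraint. Below is a minimal sketch of the idea on a toy quadratic objective, solving min_w L_adv(w) subject to ||w - w_global|| <= eps with the Augmented Lagrangian Method for an inequality constraint. All names, hyperparameters, and the quadratic stand-in loss are illustrative assumptions, not the thesis implementation.

import numpy as np

rng = np.random.default_rng(0)
dim = 10
w_global = np.zeros(dim)                 # last aggregated global model
w_backdoor = 3.0 * rng.normal(size=dim)  # model the adversary would like to submit
eps = 1.0                                # deviation threshold enforced by the defense

def grad_loss(w):
    # Gradient of a stand-in backdoor loss 0.5 * ||w - w_backdoor||^2.
    return w - w_backdoor

def constraint(w):
    # c(w) <= 0  <=>  the update stays within the deviation threshold.
    return np.linalg.norm(w - w_global) - eps

def grad_constraint(w):
    d = w - w_global
    n = np.linalg.norm(d)
    return d / n if n > 1e-12 else np.zeros_like(d)

w, lam, mu = w_global.copy(), 0.0, 10.0
for outer in range(20):                           # ALM outer iterations
    for _ in range(500):                          # inner minimization by gradient descent
        act = max(0.0, lam + mu * constraint(w))  # active part of the multiplier term
        w -= 0.01 * (grad_loss(w) + act * grad_constraint(w))
    lam = max(0.0, lam + mu * constraint(w))      # multiplier update

print("constraint value:", constraint(w))              # approx. 0: update sits on the threshold
print("distance to backdoor target:", np.linalg.norm(w - w_backdoor))

Unlike the fixed penalty in the previous sketch, the multiplier lam is adjusted across the outer iterations until the constraint holds (up to numerical precision), so in this toy setting the finished update respects the deviation threshold by construction rather than by encouragement.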
@mastersthesis{jan2023adversarial,
abstract = {Federated Learning (FL) is an approach to machine learning that facilitates the collaborative training of models and allows participants to keep their local and potentially sensitive data private. Due to its decentralized nature, FL is especially vulnerable to certain types of attacks. In a so-called backdoor attack, adversaries submit manipulated updates to the model aggregation process. For adversary-determined inputs, the newly aggregated model will then generate targeted false predictions. A common defense so far has been to measure the deviation between the global model and the model trained by a participant and to reject models whose deviation exceeds a certain threshold. Adversaries can counteract this by adjusting their training objective so that large deviations are penalized. However, this merely encourages, but does not guarantee, that the deviation threshold is not exceeded. As the main contribution, this thesis reformulates the adversarial training objective as a constrained optimization problem, a class of problems well researched in mathematics that can often be solved by the Augmented Lagrangian Method. This eliminates the dilemma adversaries face with current methods, having to trade off the effectiveness of their backdoor against remaining undetected. A secondary, independent contribution is a scheme to detect FL participants training a hidden objective, such as a backdoor, for linear approximations of non-linear models (Neural Tangent Kernels). Finally, the proposed attack and defense mechanisms are evaluated against several existing state-of-the-art backdoor types (e.g., the semantic backdoor).},
added-at = {2023-07-22T20:20:20.000+0200},
author = {König, Jan},
biburl = {https://www.bibsonomy.org/bibtex/217291a90c6500970d53715d6c7139d5c/sssgroup},
interhash = {c5a0181163d0fa0e58ad3470834ef5e8},
intrahash = {17291a90c6500970d53715d6c7139d5c},
keywords = {sss-group thesis_supervised_by_SSS_member},
month = may,
publisher = {Master Thesis},
school = {University of Würzburg},
timestamp = {2024-10-16T11:23:14.000+0200},
title = {Adversarial Training in Federated Learning using Constrained Optimization Methods},
type = {Master Thesis},
year = 2023
}