

Spotlight in Workshop: Backdoor Attacks and Defenses in Machine Learning

Unlearning Backdoor Attacks in Federated Learning

Chen Wu · Sencun Zhu · Prasenjit Mitra


Abstract:

Backdoor attacks pose a persistent threat to federated learning systems. Substantial progress has been made on mitigating such attacks during or after training; however, how to remove a potential attacker's contribution from an already-trained global model remains an open problem. Towards this end, we propose a federated unlearning method that eliminates an attacker's contribution by subtracting the attacker's accumulated historical updates from the global model and then leveraging knowledge distillation to restore the model's performance without reintroducing the backdoor. Our method applies broadly to different types of neural networks and does not rely on clients' participation, making it practical and efficient. Experiments on three canonical datasets demonstrate the effectiveness and efficiency of our method.
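The abstract outlines a two-step server-side procedure: first erase the suspected client's influence by subtracting its accumulated updates, then recover benign accuracy via knowledge distillation. Below is a minimal PyTorch sketch of that idea, under stated assumptions: the server has logged each client's per-round updates (summed here as `attacker_update_sum`), a `clean_loader` of trigger-free data is available for distillation, and the pre-unlearning global model serves as the teacher. These names and choices are illustrative, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def unlearn_backdoor(global_model, teacher_model, attacker_update_sum,
                     clean_loader, epochs=1, lr=1e-3, temperature=2.0):
    """Sketch of the two-step unlearning idea described in the abstract.

    attacker_update_sum: dict mapping parameter names to the suspected
        client's updates summed over all training rounds (assumed
        server-side bookkeeping, not specified in the abstract).
    teacher_model: the global model before unlearning; distilling from it
        on clean inputs transfers benign knowledge only, since the
        backdoor activates only on triggered inputs.
    """
    # Step 1: subtract the attacker's accumulated historical updates
    # from the trained global model.
    with torch.no_grad():
        for name, param in global_model.named_parameters():
            param -= attacker_update_sum[name]

    # Step 2: knowledge distillation on clean data to restore the
    # performance lost in step 1 without reintroducing the backdoor.
    optimizer = torch.optim.SGD(global_model.parameters(), lr=lr)
    teacher_model.eval()
    for _ in range(epochs):
        for x, _ in clean_loader:  # labels unused; teacher supplies targets
            with torch.no_grad():
                teacher_logits = teacher_model(x)
            student_logits = global_model(x)
            # Standard temperature-scaled KL distillation loss.
            loss = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return global_model
```

Note that the whole procedure runs on the server, which is consistent with the abstract's claim that the method does not rely on clients' participation.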
