In-Person Poster Presentation / Poster Accept

Finding Actual Descent Directions for Adversarial Training

Fabian Latorre · Igor Krawczuk · Leello Dadi · Thomas Pethick · Volkan Cevher

MH1-2-3-4 #127

Keywords: [ Optimization ] [ adversarial training ] [ adversarial examples ] [ robustness ] [ non-convex optimization ]


Abstract:

Adversarial Training using a strong first-order adversary (PGD) is the gold standard for training Deep Neural Networks that are robust to adversarial examples. We show that, contrary to the common understanding of the method, a gradient step taken at an optimal adversarial example may increase, rather than decrease, the adversarially robust loss, independently of the learning rate. More precisely, we provide a counterexample to a corollary of Danskin's Theorem presented in the seminal paper of Madry et al. (2018), which states that a solution of the inner maximization problem can yield a descent direction for the adversarially robust loss. Based on a correct interpretation of Danskin's Theorem, we propose Danskin's Descent Direction (DDi) and verify experimentally that it provides better directions than those obtained by a PGD adversary. On the CIFAR10 dataset, we further provide a real-world example showing that our method achieves a steeper increase in robustness during the early stages of training and is more stable than the PGD baseline. As a limitation, PGD training of ReLU+BatchNorm networks still performs better, but current theory is unable to explain this.
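In generic notation (illustrative, not taken from the paper), the statement at issue can be sketched as follows. Writing the adversarially robust loss as a maximum over perturbations, Danskin's Theorem characterizes its directional derivatives through the full set of inner maximizers rather than through any single one:

\[
\phi(\theta) \;=\; \max_{\|\delta\| \le \epsilon} L(\theta, x + \delta),
\qquad
\phi'(\theta; d) \;=\; \max_{\delta^\star \in \Delta^\star(\theta)} \big\langle \nabla_\theta L(\theta, x + \delta^\star),\, d \big\rangle,
\]

where \(\Delta^\star(\theta)\) is the set of optimal adversarial examples. When \(\Delta^\star(\theta)\) contains more than one point, the negative gradient at a single maximizer need not satisfy \(\phi'(\theta; d) < 0\); a direction that does (whenever one exists) is \(d = -g^\star\), where \(g^\star\) is the minimum-norm element of \(\operatorname{conv}\{\nabla_\theta L(\theta, x + \delta^\star) : \delta^\star \in \Delta^\star(\theta)\}\).

Below is a runnable toy illustration in numpy. The two gradient vectors and all names are hypothetical, chosen only to exhibit the phenomenon, and the minimum-norm construction is offered in the spirit of DDi rather than as the paper's exact procedure: with two optimal adversarial examples whose gradients are g1 and g2, stepping along -g1 increases the robust loss, while the negative minimum-norm element of conv{g1, g2} decreases it.

import numpy as np

def project_to_simplex(v):
    # Euclidean projection onto {w : w >= 0, sum(w) = 1} (sort-based method).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    tau = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + tau, 0.0)

def min_norm_in_hull(G, steps=2000, lr=0.05):
    # Minimum-norm element of the convex hull of the rows of G,
    # found by projected gradient descent on the simplex weights.
    alpha = np.full(G.shape[0], 1.0 / G.shape[0])
    for _ in range(steps):
        grad = 2.0 * G @ (G.T @ alpha)  # gradient of ||alpha^T G||^2
        alpha = project_to_simplex(alpha - lr * grad)
    return alpha @ G

def worst_case_slope(G, d):
    # Directional derivative of phi along d: the largest slope
    # among the gradients at the optimal adversarial examples.
    return np.max(G @ d)

# Rows: gradients of the loss at two optimal adversarial examples.
G = np.array([[1.0, 0.0],
              [-1.0, 0.5]])

d_single = -G[0] / np.linalg.norm(G[0])      # direction from one maximizer only
g_star = min_norm_in_hull(G)
d_min_norm = -g_star / np.linalg.norm(g_star)

print(worst_case_slope(G, d_single))    # > 0: the robust loss increases
print(worst_case_slope(G, d_min_norm))  # < 0: an actual descent direction

Here the single-maximizer direction has worst-case slope +1 (an ascent direction for the robust loss, no matter the step size), while the minimum-norm direction has slope about -0.24, i.e. it decreases the loss at every maximizer simultaneously.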
