Spotlights Session 1
in
Workshop: S2D-OLAD: From shallow to deep, overcoming limited and adverse data

On Adversarial Robustness: A Neural Architecture Search perspective

Chaitanya Devaguptapu


Abstract:

Adversarial robustness of deep learning models has gained much traction in the last few years. While many approaches have been proposed to improve adversarial robustness, one promising direction remains unexplored: the complex topology of the neural network architecture. In this work, we empirically study the effect of architecture on adversarial robustness by experimenting with different hand-crafted and NAS-based architectures. Our findings show that, for small-scale attacks, NAS-based architectures are more robust than hand-crafted architectures on small-scale datasets and simple tasks. However, as the dataset's size or the task's complexity increases, hand-crafted architectures become more robust than NAS-based ones. We perform the first large-scale study of adversarial robustness purely from an architectural perspective. Our results show that random sampling in the search space of DARTS (a popular NAS method) with simple ensembling can improve robustness to the PGD attack by nearly 12%. We show that NAS, which is popular for achieving SoTA accuracy, can provide adversarial accuracy as a free add-on without any form of adversarial training. We also introduce a metric to quantify the trade-off between clean accuracy and adversarial robustness.
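The "simple ensembling" of randomly sampled architectures mentioned above can be sketched as a majority vote over per-architecture predictions. This is a minimal illustration, not the authors' code; the `majority_vote` helper and the toy prediction lists are hypothetical, and in practice each list would come from a separately trained architecture sampled from the DARTS search space:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-architecture class predictions by majority vote.

    predictions: list of lists; predictions[i][j] is the class label
    that sampled architecture i assigns to example j.
    """
    n_examples = len(predictions[0])
    ensembled = []
    for j in range(n_examples):
        # Count votes for example j across all architectures.
        votes = Counter(model_preds[j] for model_preds in predictions)
        ensembled.append(votes.most_common(1)[0][0])
    return ensembled

# Toy example: three sampled architectures, four inputs. An adversarial
# perturbation that fools one architecture may fail on the others, so
# the vote can recover the majority label.
preds_a = [0, 1, 1, 0]
preds_b = [1, 1, 0, 0]
preds_c = [1, 1, 1, 0]
print(majority_vote([preds_a, preds_b, preds_c]))  # -> [1, 1, 1, 0]
```

The intuition is that independently sampled architectures tend not to share the same adversarial weaknesses, so a disagreement-resolving vote can raise robust accuracy without adversarial training.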