

Poster

A Sublinear Adversarial Training Algorithm

Yeqi Gao · Lianke Qin · Zhao Song · Yitan Wang

Halle B #113
Wed 8 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract: Adversarial training is a widely used strategy for making neural networks resistant to adversarial perturbations. For a neural network of width $m$ and $n$ input training data points in $d$ dimensions, the forward and backward computation takes $\Omega(mnd)$ time per training iteration. In this paper, we analyze the convergence guarantee of the adversarial training procedure on a two-layer neural network with shifted ReLU activation, and show that only $o(m)$ neurons are activated for each input data point per iteration. Furthermore, we develop an algorithm for adversarial training with time cost $o(mnd)$ per iteration by applying a half-space reporting data structure.
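The sparsity claim is the heart of the speedup: with a shifted ReLU, a neuron fires only when its pre-activation clears a threshold, so most neurons are inactive and the forward pass only needs the active set. The sketch below illustrates this in NumPy under assumptions not taken from the paper: Gaussian-initialized weights, a shifted ReLU of the form $\max(z - b, 0)$, and a brute-force scan standing in for the half-space reporting structure. Names like `sparse_forward` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

m, d, b = 4096, 64, 2.0          # width, input dimension, activation threshold (illustrative)
W = rng.standard_normal((m, d))  # hidden-layer weights, rows ~ N(0, I_d)
a = rng.choice([-1.0, 1.0], m)   # output-layer weights
x = rng.standard_normal(d)
x /= np.linalg.norm(x)           # unit-norm input, so each pre-activation W[r] @ x ~ N(0, 1)

def shifted_relu(z, b):
    """Shifted ReLU: a neuron fires only if its pre-activation exceeds b (assumed form)."""
    return np.maximum(z - b, 0.0)

def sparse_forward(W, a, x, b):
    """Forward pass that touches only the active neurons.

    Here the active set is found by a brute-force O(md) scan; the paper's
    algorithm instead queries a half-space reporting data structure for the
    rows with W[r] @ x > b, which is what drives the cost below O(mnd).
    """
    z = W @ x
    idx = np.flatnonzero(z > b)            # active set: o(m) neurons in expectation
    out = a[idx] @ shifted_relu(z[idx], b)
    return out, idx

out, idx = sparse_forward(W, a, x, b)
print(f"active neurons: {len(idx)} / {m} ({100 * len(idx) / m:.1f}%)")
```

With the threshold $b = 2$ above, a neuron fires with probability $\Pr[\mathcal{N}(0,1) > 2] \approx 2.3\%$, so roughly 90 of the 4096 neurons are active per input, which is the $o(m)$ behavior the abstract describes.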
