

In-Person Poster presentation / poster accept

Fundamental limits on the robustness of image classifiers

Zheng Dai · David Gifford

MH1-2-3-4 #159

Keywords: [ Theory ] [ Computer Vision ] [ Isoperimetry ]


Abstract: We prove that image classifiers are fundamentally sensitive to small perturbations in their inputs. Specifically, we show that, given some image space of $n$-by-$n$ images, all but a tiny fraction of images in any image class induced over that space can be moved outside that class by adding some perturbation whose $p$-norm is $O(n^{1/\max(p,1)})$, as long as that image class takes up at most half of the image space. We then show that $O(n^{1/\max(p,1)})$ is asymptotically optimal. Finally, we show that an increase in the bit depth of the image space leads to a loss in robustness. We supplement our results with a discussion of their implications for vision systems.
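To get intuition for the scaling claim, the sketch below (illustrative only, not the paper's proof; constant factors are dropped and pixel values are assumed to lie in $[0,1]$) compares the perturbation budget $n^{1/\max(p,1)}$ against the largest possible $p$-norm of an $n$-by-$n$ image, which is $n^{2/p}$ for finite $p \ge 1$. Their ratio, $n^{-1/p}$, vanishes as $n$ grows, so the class-changing perturbation becomes asymptotically negligible relative to the image itself.

```python
import math

def perturbation_budget(n: int, p: float) -> float:
    """Perturbation size n^(1/max(p,1)) from the abstract's bound,
    with the implicit constant taken as 1 (illustrative assumption)."""
    return n ** (1.0 / max(p, 1.0))

def max_image_norm(n: int, p: float) -> float:
    """Largest possible p-norm of an n-by-n image with pixel values
    in [0, 1]: there are n^2 pixels, each at most 1 (finite p >= 1)."""
    return (n * n) ** (1.0 / p)

# The ratio budget / max_norm = n^(-1/p) shrinks as n grows.
for p in (1.0, 2.0):
    for n in (32, 256, 1024):
        budget = perturbation_budget(n, p)
        total = max_image_norm(n, p)
        print(f"p={p}, n={n}: budget ~ {budget:.2f}, "
              f"max image norm = {total:.2f}, ratio = {budget / total:.5f}")
```

For example, at $p = 2$ and $n = 256$ the budget is $16$ while the image's norm can be as large as $256$; doubling the resolution to $n = 1024$ raises the budget to $32$ but the image norm to $1024$, so the relative perturbation keeps shrinking.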
