

Poster

Variance Networks: When Expectation Does Not Meet Your Expectations

Kirill Neklyudov · Dmitry Molchanov · Arsenii Ashukha · Dmitry P. Vetrov

Great Hall BC #72

Keywords: [ variational dropout ] [ variational inference ] [ deep learning ]


Abstract:

Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture uncertainty, prevent overfitting, and slightly boost performance through test-time averaging. In this paper, we introduce variance layers, a different kind of stochastic layer. Each weight of a variance layer follows a zero-mean distribution and is parameterized only by its variance. This means that each object is represented by a zero-mean distribution in the space of activations. We show that such layers can learn surprisingly well, can serve as an efficient exploration tool in reinforcement learning tasks, and provide a decent defense against adversarial attacks. We also show that a number of conventional Bayesian neural networks naturally converge to such zero-mean posteriors. We observe that in these cases the zero-mean parameterization leads to a much better training objective than more flexible conventional parameterizations where the mean is learned as well.
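
The following is a minimal sketch of the idea described in the abstract, written against a PyTorch-style API. It is illustrative only and is not the authors' implementation: the class name, initialization scale, and clamping constant are assumptions. Each weight is a zero-mean Gaussian whose log-variance is the only learned parameter, so the pre-activation for input x is itself zero-mean Gaussian with variance sum_i x_i^2 * sigma_ij^2, sampled here via the local reparameterization trick.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VarianceLinear(nn.Module):
    """Illustrative fully connected variance layer (not the authors' code).

    Weights follow w_ij ~ N(0, sigma_ij^2); the mean is fixed at zero and
    only log sigma^2 is learned, so every input is mapped to a zero-mean
    Gaussian in activation space.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        # Learn log-variance for numerical stability; -5.0 is an assumed init.
        self.log_sigma2 = nn.Parameter(
            torch.full((out_features, in_features), -5.0)
        )

    def forward(self, x):
        # Local reparameterization: E[b_j] = 0, Var[b_j] = sum_i x_i^2 sigma_ij^2.
        var = F.linear(x * x, self.log_sigma2.exp())
        eps = torch.randn_like(var)
        return var.clamp(min=1e-12).sqrt() * eps
```

At test time, predictions from such a layer would rely entirely on the noise, e.g. by averaging several stochastic forward passes, since the expected output is zero by construction.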
