
Liberty or Depth: Deep Bayesian Neural Nets Do Not Need Complex Weight Posterior Approximations

  • Paper
  • Feb 10, 2020
  • #MachineLearning
Sebastian Farquhar @seb_far (Author)
Yarin Gal @yaringal (Author)
Read on arxiv.org
1 Recommender
1 Mention

We challenge the longstanding assumption that the mean-field approximation for variational inference in Bayesian neural networks is severely restrictive, and show this is not the case in deep networks. We prove several results indicating that deep mean-field variational weight posteriors can induce similar distributions in function-space to those induced by shallower networks with complex weight posteriors. We validate our theoretical contributions empirically, both through examination of the weight posterior using Hamiltonian Monte Carlo in small models and by comparing diagonal- to structured-covariance in large settings. Since complex variational posteriors are often expensive and cumbersome to implement, our results suggest that using mean-field variational inference in a deeper model is both a practical and theoretically justified alternative to structured approximations.
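The mean-field approximation the abstract refers to factorises the weight posterior into independent Gaussians, one mean and one variance per weight. The sketch below is a minimal PyTorch-style illustration of such a layer, not the authors' implementation; the class name MeanFieldLinear, the standard-normal prior, and the initialisation constants are assumptions made for the example.

```python
# A minimal sketch of the mean-field (diagonal-Gaussian) variational layer
# the abstract contrasts with structured-covariance posteriors. Illustrative
# PyTorch only -- not the paper's code; names and constants are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MeanFieldLinear(nn.Module):
    """Linear layer with a fully factorised Gaussian posterior over its weights."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Variational parameters: one mean and one log-std per weight,
        # i.e. a diagonal covariance over the whole weight matrix.
        self.w_mu = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.w_log_sigma = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_log_sigma = nn.Parameter(torch.full((out_features,), -3.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reparameterisation trick: w = mu + sigma * eps, with eps ~ N(0, I).
        w = self.w_mu + self.w_log_sigma.exp() * torch.randn_like(self.w_mu)
        b = self.b_mu + self.b_log_sigma.exp() * torch.randn_like(self.b_mu)
        return F.linear(x, w, b)

    def kl(self) -> torch.Tensor:
        # Closed-form KL(q || p) against a standard-normal prior N(0, I).
        def term(mu, log_sigma):
            return (0.5 * (mu.pow(2) + (2 * log_sigma).exp() - 1.0) - log_sigma).sum()

        return term(self.w_mu, self.w_log_sigma) + term(self.b_mu, self.b_log_sigma)


# Depth is cheap under mean-field: parameters grow linearly with the number of
# weights, whereas a full-covariance posterior over even one layer grows
# quadratically. Stacking a few such layers gives the "deeper model" the
# abstract recommends over structured approximations.
model = nn.Sequential(
    MeanFieldLinear(784, 256), nn.ReLU(),
    MeanFieldLinear(256, 256), nn.ReLU(),
    MeanFieldLinear(256, 10),
)
```

Training such a model would minimise the negative ELBO: the usual data log-likelihood term plus the sum of kl() over all MeanFieldLinear layers, weighted appropriately against the dataset size.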

Mentions
Tim G. J. Rudner (sigmoid.social/@timrudner) @timrudner · Dec 9, 2020
  • Post
  • From Twitter
Check out this fantastic paper by @seb_far, Lewis Smith and @yaringal! Really exciting work! @OATML_Oxford #NeurIPS2020