
Bayesian Learning via Stochastic Gradient Langevin Dynamics

  • Paper
  • 2011
  • #Neuroscience #ComputerScience #Statistics
Yee Whye Teh
@yeewhye
(Author)
Max Welling
@wellingmax
(Author)
Read on www.stats.ox.ac.uk
1 Recommender
1 Mention
In this paper we propose a new framework for learning from large scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm we show that the iterates will converge to samples from the true posterior distribution as we anneal the stepsize. This seamless transition between optimization and Bayesian posterior sampling provides an inbuilt protection against overfitting. We also propose a practical method for Monte Carlo estimates of posterior statistics which monitors a "sampling threshold" and collects samples after it has been surpassed. We apply the method to three models: a mixture of Gaussians, logistic regression and ICA with natural gradients.
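
The update the abstract describes is a stochastic gradient step on the log posterior plus injected Gaussian noise whose variance matches the step size, with the step size annealed over iterations. The sketch below is a minimal illustration on a toy model (inferring the mean of a Gaussian with known variance), not one of the paper's three experiments; all variable names and hyperparameter values are placeholders chosen for the example.

```python
# Minimal SGLD sketch (illustrative, not the authors' code): sample the posterior
# over the mean `theta` of a 1-D Gaussian with known variance, using mini-batches.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N observations from N(true_mu, sigma^2)
N, sigma, true_mu = 10_000, 1.0, 2.0
x = rng.normal(true_mu, sigma, size=N)

# Prior: theta ~ N(0, tau^2)
tau = 10.0

def grad_log_prior(theta):
    return -theta / tau**2

def grad_log_lik(theta, batch):
    # Gradient of sum_i log N(x_i | theta, sigma^2) over the mini-batch
    return np.sum(batch - theta) / sigma**2

# Polynomially decaying step sizes eps_t = a * (b + t)^(-gamma)
a, b, gamma = 1e-4, 10.0, 0.55
batch_size, n_steps, burn_in = 100, 5_000, 1_000

theta, samples = 0.0, []
for t in range(n_steps):
    eps = a * (b + t) ** (-gamma)
    batch = x[rng.integers(0, N, size=batch_size)]
    # Unbiased estimate of the full-data gradient: rescale the mini-batch term by N/n
    grad = grad_log_prior(theta) + (N / batch_size) * grad_log_lik(theta, batch)
    # SGLD update: half step-size times gradient plus N(0, eps) injected noise
    theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    if t >= burn_in:
        samples.append(theta)

print("posterior mean estimate:", np.mean(samples))
```

The paper's "sampling threshold" detects when the injected noise dominates the stochastic gradient noise and only then starts collecting samples; the sketch above sidesteps that and simply uses a fixed burn-in period.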

Mentions
Sebastian Farquhar @seb_far · Jul 19, 2021
  • Post
  • From Twitter
This is a well-deserved test-of-time award. A quietly brilliant paper with an impact on how we think about deep learning that goes beyond the papers that cite it directly.