Saturday, April 15, 2017

History of Bayesian Neural Networks



This talk recounts the history of neural networks within the framework of Bayesian inference. Deep learning is (so far) quite empirical in nature: things work, but we lack a good theoretical framework for understanding why or even how they work. The Bayesian approach offers some progress in these directions, and also toward quantifying prediction uncertainty.

I was sad to learn from this talk that David MacKay passed away last year, from cancer. I recommended his book Information Theory, Inference, and Learning Algorithms back in 2007.

Yarin Gal's dissertation, Uncertainty in Deep Learning, is mentioned in the talk.
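A central idea in that thesis is Monte Carlo dropout: keep dropout active at test time and treat repeated stochastic forward passes as approximate posterior samples, so the spread of the predictions serves as an uncertainty estimate. Here is a minimal sketch of that idea; the tiny network, its weights, and the input are toy placeholders of my own, not anything from the talk or the thesis.

```python
# Monte Carlo dropout sketch: dropout stays on at prediction time,
# and the spread over repeated stochastic passes approximates
# predictive uncertainty. Network and data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# A tiny one-hidden-layer network with fixed, arbitrary weights.
W1 = rng.normal(size=(1, 50))
W2 = rng.normal(size=(50, 1)) / np.sqrt(50)

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout on the hidden layer."""
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) > p_drop   # random dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return h @ W2

x = np.array([[0.3]])
samples = np.stack([forward(x) for _ in range(200)])  # T stochastic passes

mean = samples.mean(axis=0)   # predictive mean
std = samples.std(axis=0)     # spread = approximate (epistemic) uncertainty
print(f"prediction {mean.ravel()[0]:.3f} +/- {std.ravel()[0]:.3f}")
```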

I suppose I can thank my Caltech education for a quasi-subconscious understanding of neural nets despite never having worked on them. They were in the air when I was on campus, due to the presence of John Hopfield (he co-founded the Computation and Neural Systems PhD program at Caltech in 1986). See also Hopfield on physics and biology.

Amusingly, I discovered this talk via deep learning: YouTube's recommendation engine, powered by deep neural nets, suggested it to me this Saturday afternoon :-)
