Training neural Bayesian nets
Laurent Dinh, Vincent Dumoulin





Directed graphical model

❖ Fast and unbiased sampling

❖ Explaining away: difficult inference
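The two bullets above can be made concrete with a tiny, hypothetical two-cause network (all probabilities below are illustrative): ancestral sampling from a directed model is fast and unbiased, but conditioning on the effect couples the causes ("explaining away"), so even this toy posterior must be estimated by something like rejection.

```python
import random

random.seed(0)

# Hypothetical two-cause network: independent causes A and B of an effect C.
def sample_joint():
    a = random.random() < 0.1           # P(A=1) = 0.1
    b = random.random() < 0.1           # P(B=1) = 0.1
    p_c = 0.99 if (a or b) else 0.01    # effect fires if either cause is on
    c = random.random() < p_c
    return a, b, c

# Ancestral sampling (parents before children) is cheap and unbiased.
# The posterior P(A=1 | C=1) couples A and B ("explaining away"); here we
# estimate it by rejection, which is already wasteful for rare evidence.
samples = [sample_joint() for _ in range(100_000)]
accepted = [a for a, b, c in samples if c]
p_a_given_c = sum(accepted) / len(accepted)
```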

Importance sampling

❖ Learning Stochastic Feedforward Neural Networks

❖ Lower bound training

❖ Log likelihood > -181.2
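The lower-bound idea can be sketched on a toy latent-variable model (hypothetical, not the one from the paper): sampling hidden states from the prior gives an unbiased Monte Carlo estimate of p(x), and Jensen's inequality turns the average log-likelihood of those same samples into a tractable lower bound.

```python
import math
import random

random.seed(0)

# Toy model (illustrative): h ~ Bernoulli(0.5), x | h ~ Normal(mu_h, 1).
# By Jensen's inequality, E_h[log p(x|h)] <= log E_h[p(x|h)] = log p(x).
MU = {False: -2.0, True: 2.0}

def log_gauss(x, mu):
    return -0.5 * math.log(2 * math.pi) - 0.5 * (x - mu) ** 2

def estimates(x, n=50_000):
    logs = [log_gauss(x, MU[random.random() < 0.5]) for _ in range(n)]
    # Monte Carlo estimate of p(x) (unbiased), reported on the log scale:
    m = max(logs)
    log_px = m + math.log(sum(math.exp(l - m) for l in logs) / n)
    # Jensen lower bound: average log-likelihood under prior samples.
    lower_bound = sum(logs) / n
    return log_px, lower_bound

log_px, lower_bound = estimates(x=1.5)
```

The bound holds sample-by-sample (log of a mean is never below the mean of logs), which is what makes it usable as a training objective.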

Variational autoencoder

❖ Variational bound

❖ Fast approximate posterior sampling

❖ Log likelihood: -95
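A minimal sketch of the variational bound with fast, reparameterized posterior sampling, on a hypothetical 1-D Gaussian model where the exact posterior is known, so the tightness of the bound can be checked directly:

```python
import math
import random

random.seed(0)

# Illustrative model: z ~ N(0, 1), x | z ~ N(z, 1), approximate posterior
# q(z|x) = N(mu, sigma^2). The ELBO is E_q[log p(z) + log p(x|z) - log q(z|x)],
# estimated with the reparameterization trick z = mu + sigma * eps.
def log_gauss(v, mu, sigma=1.0):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - 0.5 * ((v - mu) / sigma) ** 2

def elbo(x, mu, sigma, n=20_000):
    total = 0.0
    for _ in range(n):
        z = mu + sigma * random.gauss(0.0, 1.0)   # reparameterized sample
        total += log_gauss(z, 0.0) + log_gauss(x, z) - log_gauss(z, mu, sigma)
    return total / n

# The exact posterior here is N(x/2, 1/2); using it makes the bound tight,
# equal to log p(x) with p(x) = N(x; 0, 2). A worse q gives a looser bound.
x = 1.0
tight = elbo(x, x / 2, math.sqrt(0.5))
loose = elbo(x, 0.0, 1.0)
```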

Variational autoencoder

Recognition Model

❖ Trained with high momentum

❖ Similar to Smoothed Gradient for Stochastic Variational Inference
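The update behind "high momentum" can be sketched as the classical heavy-ball rule, whose running average of past gradients is what resembles Smoothed Gradient for Stochastic Variational Inference; the 0.99 / 0.01 coefficients below are illustrative, not the paper's values.

```python
# Classical (heavy-ball) momentum update; "high momentum" means mu near 1.
def momentum_step(theta, velocity, grad, lr=0.01, mu=0.99):
    """v <- mu * v - lr * grad;  theta <- theta + v."""
    velocity = mu * velocity - lr * grad
    return theta + velocity, velocity

# Toy objective f(theta) = theta^2, with gradient 2 * theta:
theta, velocity = 5.0, 0.0
for _ in range(2000):
    theta, velocity = momentum_step(theta, velocity, grad=2 * theta)
```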

Component collapsing

Depth

Effects on the model

Depth

Importance sampling with proposal distribution

❖ Use a learned proposal for importance sampling

❖ Lower bound training

❖ Marginally less collapsing

❖ Log likelihood: -91.1
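The learned-proposal bound can be sketched as the multi-sample importance-sampling estimate log((1/k) Σᵢ p(x, zᵢ)/q(zᵢ|x)) with zᵢ drawn from the proposal q. The 1-D model below is hypothetical; it only shows that a proposal closer to the true posterior tightens the bound.

```python
import math
import random

random.seed(0)

# Illustrative model: z ~ N(0, 1), x | z ~ N(z, 1); proposal q = N(mu, sigma^2).
def log_gauss(v, mu, sigma=1.0):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - 0.5 * ((v - mu) / sigma) ** 2

def importance_bound(x, mu, sigma, k=8, reps=3000):
    """Average of log((1/k) * sum_i p(x, z_i) / q(z_i)), z_i ~ q."""
    total = 0.0
    for _ in range(reps):
        logs = []
        for _ in range(k):
            z = random.gauss(mu, sigma)
            logs.append(log_gauss(z, 0.0) + log_gauss(x, z) - log_gauss(z, mu, sigma))
        m = max(logs)
        total += m + math.log(sum(math.exp(l - m) for l in logs) / k)
    return total / reps

# A better-matched ("learned") proposal tightens the bound toward
# log p(x) = log N(x; 0, 2); the exact posterior N(x/2, 1/2) makes it exact.
x = 1.0
from_prior = importance_bound(x, 0.0, 1.0)
from_learned = importance_bound(x, x / 2, math.sqrt(0.5))
```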

Thank you for your attention