Importance sampling
❖ Learning Stochastic Feedforward Neural Networks
❖ Lower bound training
❖ Log likelihood > -181.2
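As a minimal sketch of the idea (on a toy Gaussian model, not the stochastic feedforward networks from the slide): the log likelihood log p(x) = log E_{h~p(h)}[p(x|h)] can be estimated by importance sampling with the prior as proposal, averaging the conditional likelihoods of sampled latents. All names and the toy model here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent-variable model (assumption, for illustration only):
# latent h ~ N(0, 1), observation x | h ~ N(h, 1).
# The true marginal is x ~ N(0, 2), so log p(x) is known in closed form.

def log_gauss(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def is_log_marginal(x, n_samples=100_000):
    """Importance sampling with the prior as proposal:
    log p(x) ~= log mean_k p(x | h_k),  h_k ~ p(h)."""
    h = rng.standard_normal(n_samples)   # samples from the prior
    log_w = log_gauss(x, h, 1.0)         # log p(x | h_k)
    # log-mean-exp for numerical stability
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))

x = 1.5
est = is_log_marginal(x)
exact = log_gauss(x, 0.0, 2.0)
```

With enough prior samples the estimate approaches the exact marginal, but the estimator becomes very inefficient when the prior places little mass near the posterior, which motivates the learned proposals later in the slide.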
Variational autoencoder
❖ Variational bound
❖ Fast approximate posterior sampling
❖ Log likelihood: -95
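The variational bound can be sketched on the same toy Gaussian model (an assumption for illustration, not the slide's actual network): the ELBO E_q[log p(x, h) - log q(h|x)] is a lower bound on log p(x) that is tight when q matches the exact posterior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy model (assumption): h ~ N(0, 1), x | h ~ N(h, 1).
def log_gauss(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def elbo(x, q_mean, q_var, n_samples=100_000):
    """Monte Carlo estimate of the variational bound
    E_q[log p(x, h) - log q(h | x)] with a Gaussian q."""
    h = q_mean + np.sqrt(q_var) * rng.standard_normal(n_samples)
    log_joint = log_gauss(h, 0.0, 1.0) + log_gauss(x, h, 1.0)
    log_q = log_gauss(h, q_mean, q_var)
    return np.mean(log_joint - log_q)

x = 1.5
exact = log_gauss(x, 0.0, 2.0)
# The exact posterior here is N(x/2, 1/2); with it the bound is tight.
tight = elbo(x, x / 2, 0.5)
# A mismatched q loosens the bound by KL(q || posterior).
loose = elbo(x, 0.0, 1.0)
```

Drawing h from the approximate posterior q(h|x) in one pass is what makes posterior sampling fast at training time, at the cost of the bound's KL gap.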
Recognition Model
❖ Trained with high momentum
❖ Similar to Smoothed Gradient for Stochastic Variational Inference
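A rough sketch of why high momentum resembles gradient smoothing (a toy quadratic objective with artificially noisy gradients; all specifics are illustrative assumptions): the velocity term is an exponential moving average of past stochastic gradients, much like the averaged gradients used in smoothed-gradient stochastic variational inference.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimize f(theta) = 0.5 * theta^2 from noisy gradient estimates
# g = theta + noise (the noise stands in for minibatch/sampling noise).
theta, velocity = 5.0, 0.0
lr, mu = 0.01, 0.9   # high momentum mu averages ~1/(1-mu) past gradients
for _ in range(2000):
    grad = theta + rng.standard_normal()   # noisy gradient estimate
    velocity = mu * velocity - lr * grad   # exponential moving average step
    theta += velocity
```

Despite unit-variance gradient noise, the smoothed updates drive theta close to the optimum at 0.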
Importance sampling with proposal distribution
❖ Use a learned proposal for the importance sampling
❖ Lower bound training
❖ Marginally less collapsing
❖ Log likelihood: -91.1
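The effect of a learned proposal can be sketched on the same toy Gaussian model (an illustrative assumption): importance sampling with weights p(x, h)/q(h) is far lower variance when q is close to the true posterior than when the prior is used as the proposal.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model (assumption): h ~ N(0, 1), x | h ~ N(h, 1);
# the exact posterior is N(x/2, 1/2).
def log_gauss(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def is_log_marginal(x, q_mean, q_var, n_samples):
    """Importance sampling with proposal q:
    log p(x) ~= log mean_k [ p(x, h_k) / q(h_k) ],  h_k ~ q."""
    h = q_mean + np.sqrt(q_var) * rng.standard_normal(n_samples)
    log_w = (log_gauss(h, 0.0, 1.0) + log_gauss(x, h, 1.0)
             - log_gauss(h, q_mean, q_var))
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))

x = 3.0
exact = log_gauss(x, 0.0, 2.0)
# "Learned" proposal matching the posterior vs. the prior as proposal.
learned = [is_log_marginal(x, x / 2, 0.5, 100) for _ in range(200)]
prior = [is_log_marginal(x, 0.0, 1.0, 100) for _ in range(200)]
```

With the posterior-matched proposal every weight equals p(x), so the estimate has essentially zero variance, while the prior proposal's estimates scatter widely for the same sample budget.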