Monte Carlo Gradient Estimators and Variational Inference
19 Dec 2016
First, I’d like to say that I thoroughly enjoyed the Advances in Approximate Bayesian Inference workshop at NIPS 2016 — great job Dustin Tran et al. An awesome poster (with a memorable name) from Geoffrey Roeder, Yuhuai Wu, and David Duvenaud probed an important, but typically undiscussed choice that practitioners have to make when doing blackbox variational inference with pathwise gradient estimators^{1}. This post describes the phenomenon that they point out. I will try to provide some additional intuition through wordy prose and a numerical experiment on a simple example.
We use variational inference (VI) to approximate a posterior distribution, $p(z \,|\, x)$, with a tractable approximation, $q(z; \lambda)$. To remain applicable to a general class of models, we often turn to Monte Carlo VI methods (e.g. blackbox VI or autodiff VI), where we estimate certain expectations with respect to $q$ using samples.
Users of VI have a choice: which Monte Carlo estimator of the ELBO should we use? We typically write the ELBO objective, $\mathcal{L}(\lambda)$, in a few standard ways^{2}:
(i) KL form
$$\mathcal{L}(\lambda) = \mathbb{E}_{q(z; \lambda)}\left[ \ln p(x \,|\, z) \right] - KL\left( q(z; \lambda) \,\big\|\, p(z) \right)$$
(ii) Entropy form
$$\mathcal{L}(\lambda) = \mathbb{E}_{q(z; \lambda)}\left[ \ln p(x, z) \right] + \mathbb{H}\left[ q(z; \lambda) \right]$$
(iii) Fully Monte Carlo (FMC) form
$$\mathcal{L}(\lambda) = \mathbb{E}_{q(z; \lambda)}\left[ \ln p(x, z) - \ln q(z; \lambda) \right]$$
where the functionals $KL(\cdot \,\|\, \cdot)$ and $\mathbb{H}[\cdot]$ are the Kullback-Leibler divergence and the entropy, respectively.
Because these expressions all involve an expectation, we cannot guarantee that $\mathcal{L}(\lambda)$ will be tractable for general $p$. We sidestep this issue by approximating the objective (and its gradient) with samples from $q$; for instance, the entropy form approximation is computed
$$\hat{\mathcal{L}}_{\text{entropy}}(\lambda) = \frac{1}{L} \sum_{\ell=1}^{L} \ln p(x, z^{(\ell)}) + \mathbb{H}\left[ q(z; \lambda) \right], \qquad z^{(\ell)} \sim q(z; \lambda),$$
and the fully Monte Carlo form is computed
$$\hat{\mathcal{L}}_{\text{FMC}}(\lambda) = \frac{1}{L} \sum_{\ell=1}^{L} \left[ \ln p(x, z^{(\ell)}) - \ln q(z^{(\ell)}; \lambda) \right], \qquad z^{(\ell)} \sim q(z; \lambda).$$
Both estimators, $\hat{\mathcal{L}}_{\text{entropy}}$ and $\hat{\mathcal{L}}_{\text{FMC}}$, are random variables, seeded by the randomness originating from $q$; both estimators will have some variance (they are unbiased, so they will have the same mean).
Notice the subtle difference between the two — the entropy estimator computes the entropy in closed form (which is possible in the case of tractable distributions), whereas the full Monte Carlo estimator computes that term via Monte Carlo — recall that $\mathbb{H}\left[ q(z; \lambda) \right] = -\mathbb{E}_{q}\left[ \ln q(z; \lambda) \right]$.
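To make the difference concrete, here is a minimal numpy sketch (my own, not the code behind this post) of the two estimators for a diagonal-Gaussian $q$ and a Gaussian stand-in for $\ln p(x, z)$; all of the names (`log_joint`, `elbo_entropy`, `elbo_fmc`) and toy values (`m`, `s`, `log_px`) are illustrative assumptions.

```python
# A minimal numpy sketch of the entropy-form and fully Monte Carlo ELBO
# estimators, using a diagonal-Gaussian q and a Gaussian stand-in for log p(x, z).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the unnormalized log joint: log p(x, z) = log N(z; m, s^2) + log p(x).
m, s, log_px = np.array([1.0, -2.0]), np.array([0.5, 2.0]), 3.7

def log_joint(z):
    return np.sum(-0.5 * ((z - m) / s) ** 2 - np.log(s) - 0.5 * np.log(2 * np.pi), axis=-1) + log_px

def log_q(z, mu, log_sigma):
    sigma = np.exp(log_sigma)
    return np.sum(-0.5 * ((z - mu) / sigma) ** 2 - log_sigma - 0.5 * np.log(2 * np.pi), axis=-1)

def entropy_q(log_sigma):
    # Closed-form entropy of a diagonal Gaussian.
    return np.sum(log_sigma) + 0.5 * len(log_sigma) * (1.0 + np.log(2 * np.pi))

def sample_q(mu, log_sigma, num_samples):
    # Reparameterized samples: z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.standard_normal((num_samples, len(mu)))
    return mu + np.exp(log_sigma) * eps

def elbo_entropy(mu, log_sigma, num_samples=10):
    z = sample_q(mu, log_sigma, num_samples)
    return np.mean(log_joint(z)) + entropy_q(log_sigma)       # entropy in closed form

def elbo_fmc(mu, log_sigma, num_samples=10):
    z = sample_q(mu, log_sigma, num_samples)
    return np.mean(log_joint(z) - log_q(z, mu, log_sigma))    # entropy term via Monte Carlo
```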
We might expect the KL or entropy forms, where a part of the expectation is analytically integrated out, to have lower variance when estimating with Monte Carlo samples — and that intuition is correct sometimes, but not all the time. When $q(z; \lambda)$ is flexible enough, and close to $p(z \,|\, x)$, then the randomness in $z^{(\ell)}$ is “canceled out” in each term in the sum

$$\ln p(x, z^{(\ell)}) - \ln q(z^{(\ell)}; \lambda) = \ln p(x) + \ln p(z^{(\ell)} \,|\, x) - \ln q(z^{(\ell)}; \lambda),$$

so when $q(z; \lambda) = p(z \,|\, x)$, every term equals $\ln p(x)$ and the full Monte Carlo estimator has zero variance. In fact, we see that the KL and Entropy estimators will always have some irreducible variance from estimating the data term, even when we’ve accomplished our goal of $q(z; \lambda) = p(z \,|\, x)$. However, when $q(z; \lambda)$ is far from $p(z \,|\, x)$, the FMC estimator can have much, much larger variance than the Entropy estimator.
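Continuing the hypothetical sketch above, here is a tiny check of that claim: set $q$ equal to the Gaussian stand-in exactly and compare the per-sample terms of the two estimators.

```python
# Continuing the sketch above: set q equal to the Gaussian stand-in exactly.
mu_star, log_sigma_star = m.copy(), np.log(s)
z = sample_q(mu_star, log_sigma_star, 1000)

fmc_terms = log_joint(z) - log_q(z, mu_star, log_sigma_star)  # every term equals log p(x)
entropy_terms = log_joint(z)                                   # still random

print(fmc_terms.std())      # ~0 (up to floating point): zero-variance FMC at q = p
print(entropy_terms.std())  # > 0: irreducible variance from the data term
```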
Gradient Estimators
When optimizing, we care more about the variance of the gradient of the ELBO than the value of the ELBO itself. The pathwise gradient estimator uses the reparameterization trick to turn a Monte Carlo ELBO estimator into a Monte Carlo ELBO gradient estimator, $\hat{\nabla}_\lambda \mathcal{L}(\lambda)$, which we then use in a gradient-based optimization procedure. The variance of the gradient estimator will profoundly affect the (practical) speed of convergence of the optimization.
The natural question becomes: what is the variance of the gradient estimators derived from the above ELBO forms? Roeder et al. look at the variance of the pathwise gradient estimator as applied to the fully Monte Carlo form. For a single sample $\epsilon \sim q_0(\epsilon)$,^{3} with $z = T(\epsilon; \lambda)$, the pathwise gradient of the fully Monte Carlo estimator can be written

$$\hat{\nabla}_\lambda \mathcal{L}_{\text{FMC}} = \nabla_\lambda \Big[ \ln p\big(x, T(\epsilon; \lambda)\big) - \ln q\big(T(\epsilon; \lambda); \lambda\big) \Big].$$
One thing tripped me up at first: the second term is a function of $\lambda$ through two different arguments, $z = T(\epsilon; \lambda)$ and $\lambda$ itself. This allows us to decompose the gradient into two components: (i) variation due to dependence on $\lambda$ through $z = T(\epsilon; \lambda)$, and (ii) dependence on $\lambda$ directly through the probability density function $q(\cdot\,; \lambda)$. In fact, we can view the entire gradient through the lens of this decomposition

$$\hat{\nabla}_\lambda \mathcal{L}_{\text{FMC}} = \underbrace{\nabla_z \big[ \ln p(x, z) - \ln q(z; \lambda) \big] \, \nabla_\lambda T(\epsilon; \lambda)}_{\text{pathwise term}} \;-\; \underbrace{\nabla_\lambda \ln q(z; \lambda) \Big|_{z = T(\epsilon; \lambda)}}_{\text{score function term}}$$
where the pathwise term accounts for variation via $z = T(\epsilon; \lambda)$, and the score function term accounts for variation through the pdf of $q$, which varies as a function of $\lambda$ even when the first argument, $z$, is held fixed.
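As a concrete (and simplified) instance of this decomposition, here are hand-written single-sample gradients with respect to the mean of the diagonal-Gaussian $q$ from the sketch above, for which $\nabla_\mu T(\epsilon; \lambda)$ is the identity; the scale parameters work the same way with $\partial z / \partial \log\sigma = \sigma \epsilon$. The function names are my own.

```python
# Single-sample decomposition of the FMC pathwise gradient w.r.t. mu, for the
# diagonal-Gaussian q above (z = mu + sigma * eps, so dz/dmu is the identity).
def grad_z_log_joint(z):
    # d/dz log p(x, z) for the Gaussian stand-in defined earlier.
    return -(z - m) / s ** 2

def grad_z_log_q(z, mu, sigma):
    # d/dz log q(z; mu, sigma).
    return -(z - mu) / sigma ** 2

def fmc_grad_mu(mu, log_sigma, eps):
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps                                          # pathwise sample
    pathwise = grad_z_log_joint(z) - grad_z_log_q(z, mu, sigma)   # pathwise term
    score = (z - mu) / sigma ** 2            # d/dmu log q(z; mu, sigma), z held fixed
    return pathwise - score, pathwise, score                      # total and its two parts
```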
This decomposition makes clear that the data term, $\ln p(x, z)$, only varies as a function of $\lambda$ through $z = T(\epsilon; \lambda)$. So when we have a nearly perfect approximation, $q(z; \lambda) \approx p(z \,|\, x)$, the gradients of $\ln p(x, z)$ and $\ln q(z; \lambda)$ with respect to $z$ are close, i.e.

$$\nabla_z \ln p(x, z) \approx \nabla_z \ln q(z; \lambda).$$
When our approximation is almost there, the pathwise component of the gradient is close to zero for every sample $z = T(\epsilon; \lambda)$. In this regime, the source of variance of the gradient estimator is the score function term, $\nabla_\lambda \ln q(z; \lambda)$.
The score function has expectation zero — so we can simply remove (or scale) it. The question becomes: when should we reduce or remove the score function component of the pathwise gradient estimator?^{4}
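Continuing the hypothetical sketch, the variance-reduced option simply keeps the pathwise term:

```python
# Continuing the sketch: keep only the pathwise term (the variance-reduced estimator).
def fmc_grad_mu_path_only(mu, log_sigma, eps):
    _, pathwise, _ = fmc_grad_mu(mu, log_sigma, eps)
    return pathwise   # unbiased: the dropped score term has expectation zero
```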
Numerical Example
To get a sense of the variance of these gradient estimators, I used a multivariate Gaussian (with non-trivial covariance) as the target distribution, and a $q$ in the same Gaussian family. Optimizing the ELBO with the pathwise gradient of the entropy estimator (note the noisy path near convergence), I measured the variance of each gradient component, $\partial \hat{\mathcal{L}} / \partial \lambda_d$, at each step of the optimization (for each of the ~100 parameters).
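For reference, here is a sketch (again with my own names, not the code behind this post) of the kind of measurement involved: draw many single-sample gradients at the current variational parameters and record the per-component standard deviation.

```python
# Estimate the per-component standard deviation of a single-sample gradient
# estimator at the current variational parameters (a sketch of the measurement).
def grad_component_std(grad_fn, mu, log_sigma, num_draws=500):
    eps = rng.standard_normal((num_draws, len(mu)))
    grads = np.stack([grad_fn(mu, log_sigma, e) for e in eps])
    return grads.std(axis=0)   # one standard deviation per parameter component

# e.g., comparing the mu-gradient estimators from the sketches above:
# grad_component_std(lambda *a: fmc_grad_mu(*a)[0], mu, log_sigma)
# grad_component_std(fmc_grad_mu_path_only, mu, log_sigma)
```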
The animation below compares the standard deviation (not variance) of three estimators:

- pathwise entropy
- pathwise full Monte Carlo
- pathwise full Monte Carlo, removing the score function term (from Roeder et al.)

Each dot compares the standard deviation of a gradient component, where

- Blue dots compare the entropy estimator (y-axis) to the full Monte Carlo estimator (x-axis).
- Green dots compare the full Monte Carlo without the score function term (y-axis) to the full Monte Carlo estimator (x-axis).
Notice the progression — when the optimization starts out with $q$ very far from $p$, the Entropy estimator provides (by far) the lowest-variance estimates (across all components). As we reach convergence (once we’re within about 2-3 nats of the true distribution), the variance-reduced FMC estimates shrink toward zero. Had I used the variance-reduced FMC estimator at this point in the optimization, we probably would have seen a much faster variance decrease in the green dots.
In this scenario, the pathwise entropy estimator dominates the pathwise FMC estimator — we should never choose the FMC for gradients, unless we’re variance-reducing them near the end of the optimization procedure.
Another interesting thing to note here is that the entropy estimator and full Monte Carlo estimator settle to essentially identical component variances. This makes sense when $q(z; \lambda)$ and $p(z \,|\, x)$ are close — the variance of the score function gradient component will be equal to the variance of the pathwise component that relies on $\nabla_z \ln p(x, z)$.
This numerical experiment suggests finding some tradeoff that gradually removes the score function component — that way the optimization procedure can enjoy the early benefits of the entropy estimator with the late benefits of the reduced variance. I’m also curious how the variance of pathwise estimators affects natural gradients, and consequently their convergence properties.
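One way to read that suggestion, continuing the hypothetical sketch above: scale the score-function component by a factor that decays over the course of optimization; since the scaled term still has expectation zero, the gradient estimator remains unbiased at every step. The schedule below is an illustrative assumption, not the post's recipe.

```python
# Continuing the sketch: anneal the score-function component instead of
# removing it outright; any fixed scale keeps the estimator unbiased.
def fmc_grad_mu_annealed(mu, log_sigma, eps, step, decay=0.01):
    _, pathwise, score = fmc_grad_mu(mu, log_sigma, eps)
    scale = np.exp(-decay * step)      # ~1 early in the optimization, -> 0 late
    return pathwise - scale * score
```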
Looking forward to the full paper!
Hasta luego, Barcelona …

For background, check out these slides from the Variational Inference tutorial (ctrl-f “Pathwise Estimator”) ↩

Recall that the pathwise gradient estimator relies on a differentiable map, $T(\epsilon; \lambda)$, that transforms some seed randomness, $\epsilon \sim q_0(\epsilon)$, such that $z = T(\epsilon; \lambda)$ is distributed according to $q(z; \lambda)$. ↩

We can view this as adding a control variate, a common variance reduction technique. ↩