gpapamak / maf

Masked Autoregressive Flow

Error in log likelihood computation

vidn opened this issue

commented

self.L = -0.5 * (n_inputs * np.log(2 * np.pi) + tt.sum(self.u ** 2 - self.logp, axis=1))

The tt.sum term should be tt.sum(self.u ** 2 * tt.exp(self.logp) - self.logp, axis=1)

No, your suggestion is incorrect. self.u ** 2 already includes the scaling by tt.exp(self.logp), since self.u is calculated by:

self.u = tt.exp(0.5 * self.logp) * (self.input - self.m)
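
For what it's worth, here is a minimal NumPy check of this point (a sketch, not the repo's Theano code; the values are made up for illustration):

import numpy as np

rng = np.random.default_rng(0)
x, m, logp = rng.normal(size=3)   # hypothetical scalars; logp = log precision
p = np.exp(logp)                  # precision = 1 / variance

u = np.exp(0.5 * logp) * (x - m)  # same transformation as in the code above
assert np.isclose(u ** 2, p * (x - m) ** 2)  # u**2 already carries the precision

So multiplying self.u ** 2 by tt.exp(self.logp) again, as the suggested fix does, would double-count the precision.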
commented

You are right. I missed that you are computing the LL with respect to u... However, then you should not need the "-self.logp" term either, because the u random variable has unit variance (= zero log precision) by definition. logp would only matter if you compute the LL with respect to x. This is what confused me in the first place...

Agree?

No, we are computing log likelihood with respect to x. Let me explain:

Let P(x) be a Gaussian with mean m and precision p. The log of P(x) is:

log(P(x)) = -0.5 * log(2*pi) + 0.5 * log(p) - 0.5 * p * (x-m)^2

Now define u = sqrt(p) * (x-m). You can rewrite log(P(x)) as:

log(P(x)) = -0.5 * log(2*pi) + 0.5 * log(p) - 0.5 * u^2

The above is precisely what the code computes.
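
As a quick sanity check, this formula agrees with SciPy's Gaussian log density (again a sketch with made-up values; note the scale parameter is 1/sqrt(p)):

import numpy as np
from scipy.stats import norm

m, logp, x = 0.3, 1.2, -0.7       # hypothetical values
p = np.exp(logp)
u = np.sqrt(p) * (x - m)

ll = -0.5 * np.log(2 * np.pi) + 0.5 * logp - 0.5 * u ** 2
assert np.isclose(ll, norm.logpdf(x, loc=m, scale=1 / np.sqrt(p)))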

One more point to help clarify things: as you correctly said, u follows a Gaussian distribution with mean 0 and precision 1. Call this distribution P(u). Its log is:

log(P(u)) = -0.5 * log(2*pi) - 0.5 * u^2

This means you can also write the log probability of x as:

log(P(x)) = log(P(u)) + 0.5 * log(p)
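
This identity also checks out numerically, using the same made-up values as in the sketch above (0.5 * logp is the log Jacobian of the transformation u = sqrt(p) * (x - m)):

import numpy as np
from scipy.stats import norm

m, logp, x = 0.3, 1.2, -0.7       # hypothetical values
p = np.exp(logp)
u = np.sqrt(p) * (x - m)

lhs = norm.logpdf(x, loc=m, scale=1 / np.sqrt(p))  # log P(x)
rhs = norm.logpdf(u) + 0.5 * logp                  # log P(u) + log Jacobian
assert np.isclose(lhs, rhs)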

I hope that helps clarify things.

commented

Thanks very much, Vidyut! I hope MAF will be useful in your application.
