rdevon / DIM

Deep InfoMax (DIM), or "Learning Deep Representations by Mutual Information Estimation and Maximization"

Question about prior matching

kosta-jo opened this issue · comments

Hi @rdevon, I have a question about how the discriminator weights are updated during prior matching. Updating the discriminator weights to maximize the prior loss from the paper is done here, and Z_Q is detached so the encoder weights will not be updated.
But when you want to update the encoder weights to minimize the loss from the paper, you do that here, and Q_samples are computed using the discriminator in the self.score function. So I can't see how the discriminator weights avoid also minimizing the loss in this case (which would be wrong, since the discriminator wants to maximize it)?

So this is admittedly not entirely transparent because of how cortex works (apologies, as this is a framework I worked on a while ago and couldn't get enough help to support). When the model "adds losses," as it does at the end of that routine function, those losses apply only to the parameters of the models named by the keys used in that call. So when I write `self.add_losses(encoder=some_loss)`, even if `some_loss` depends on the parameters of some other network/model, those parameters won't change according to `some_loss` unless I also write `discriminator=some_loss`.
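For anyone reading along, the scoping described above can be reproduced in plain PyTorch with separate optimizers, one per model. This is a hypothetical minimal sketch (toy `nn.Linear` stand-ins and a made-up GAN-style prior loss, not the actual DIM/cortex code): gradients from the encoder loss do flow back through the discriminator, but because the encoder's optimizer only holds encoder parameters, stepping it leaves the discriminator untouched.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for the real networks (shapes are arbitrary).
encoder = nn.Linear(4, 2)        # maps inputs to codes Z_Q
discriminator = nn.Linear(2, 1)  # scores codes vs. prior samples

# One optimizer per model: this is what scopes each loss to one
# model's parameters, like cortex's add_losses keys.
enc_opt = torch.optim.SGD(encoder.parameters(), lr=0.1)
disc_opt = torch.optim.SGD(discriminator.parameters(), lr=0.1)

x = torch.randn(8, 4)        # fake input batch
prior = torch.randn(8, 2)    # samples from the prior P

# --- Discriminator step: maximize the prior-matching loss.
# Z_Q is detached, so no gradient reaches the encoder here.
z_q = encoder(x).detach()
p_scores = torch.sigmoid(discriminator(prior))
q_scores = torch.sigmoid(discriminator(z_q))
disc_loss = -(p_scores.log().mean() + (1 - q_scores).log().mean())
disc_opt.zero_grad()
disc_loss.backward()
disc_opt.step()  # only discriminator parameters change

# Snapshot discriminator weights before the encoder step.
disc_weight_before = discriminator.weight.detach().clone()

# --- Encoder step: minimize the loss. Q_samples are computed
# through the discriminator, so discriminator parameters DO
# receive gradients here...
q_scores = torch.sigmoid(discriminator(encoder(x)))
enc_loss = -q_scores.log().mean()
enc_opt.zero_grad()
disc_opt.zero_grad()  # hygiene: discard grads that reached the discriminator
enc_loss.backward()
enc_opt.step()  # ...but only encoder parameters are updated

# Discriminator weights are unchanged by the encoder step.
assert torch.equal(discriminator.weight, disc_weight_before)
```

The key point is the last `step()`: even though `enc_loss.backward()` populated `.grad` on the discriminator's parameters, `enc_opt` never touches them, so the discriminator cannot "help minimize" the encoder's loss.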

Hope that clears things up.

Great, thanks!