openai / improved-gan

Code for the paper "Improved Techniques for Training GANs"

Home Page: https://arxiv.org/abs/1606.03498

nn.batch_norm

xinmei9322 opened this issue · comments

In train_mnist_feature_matching.py and the other similar training scripts, the generator uses nn.batch_norm, but it looks like the updates collected in self.bn_updates = [(self.avg_batch_mean, new_m), (self.avg_batch_var, new_v)] are never applied in train_mnist_feature_matching.py. (I can see that init_updates does get applied in that file.) When I print them, the avg_batch_mean values are always 0 and the avg_batch_var values are always 1.
That means the batch normalization in your code does not normalize inputs with the running statistics at test time. Is this a bug? Thanks.
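For reference, here is a minimal Theano sketch (not the repository's code; the momentum factor and shapes are just illustrative) of why the running statistics stay at their initial values when the update pairs are never passed to theano.function:

```python
import numpy as np
import theano
import theano.tensor as T

# a running-mean shared variable, analogous to avg_batch_mean (initialized to 0)
x = T.matrix('x')
avg_batch_mean = theano.shared(np.zeros(3, dtype=theano.config.floatX))

# exponential moving average of the minibatch mean (0.9/0.1 momentum is illustrative)
batch_mean = T.mean(x, axis=0)
new_m = 0.9 * avg_batch_mean + 0.1 * batch_mean
bn_updates = [(avg_batch_mean, new_m)]

# the update only happens if it is passed to theano.function via `updates`
f_no_update = theano.function([x], batch_mean)
f_with_update = theano.function([x], batch_mean, updates=bn_updates)

data = np.random.randn(16, 3).astype(theano.config.floatX)
f_no_update(data)
print(avg_batch_mean.get_value())   # still all zeros, like what I observe
f_with_update(data)
print(avg_batch_mean.get_value())   # now moves toward the batch mean
```

So if the compiled training function only includes init_updates and the optimizer updates, avg_batch_mean and avg_batch_var will never move away from 0 and 1.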