VITA-Group / EnlightenGAN

[IEEE TIP] "EnlightenGAN: Deep Light Enhancement without Paired Supervision" by Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, Zhangyang Wang

Question about attention map

Icy-green opened this issue · comments

While running the code, I noticed that A_Gray is the attention map mentioned in the paper, but A_Gray is never fed into the network during training. Could you please help me understand why?

The attention map is used here:

x = self.bn1_1(self.LReLU1_1(self.conv1_1(torch.cat((input, gray), 1))))
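To illustrate what that line does, here is a minimal, hypothetical sketch (not the repo's exact code): per the paper, the self-regularized attention map is 1 minus the normalized illumination channel of the input, and it is concatenated with the RGB input as a fourth channel before the first convolution. The channel count (32) and the luma weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

def attention_map(rgb: torch.Tensor) -> torch.Tensor:
    """rgb: (N, 3, H, W) in [0, 1]. Returns a (N, 1, H, W) attention map:
    1 - illumination, so darker regions receive higher attention."""
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    illum = 0.299 * r + 0.587 * g + 0.114 * b  # standard luma weights (assumed)
    return 1.0 - illum

# First conv of an attention-aware generator: 3 RGB channels + 1 gray channel.
conv1_1 = nn.Conv2d(4, 32, kernel_size=3, padding=1)  # 32 filters is illustrative

x = torch.rand(2, 3, 64, 64)          # a batch of low-light images
gray = attention_map(x)               # the "gray" tensor in the quoted line
out = conv1_1(torch.cat((x, gray), 1))  # mirrors torch.cat((input, gray), 1)
print(out.shape)  # torch.Size([2, 32, 64, 64])
```

Because the luma weights sum to 1, the attention map stays in [0, 1] whenever the input does.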

commented

Although this line of code is present, it was never executed, because your which_model_netG option selected 'unet_256' instead of 'sid_unet_resize'. Could you please help me understand this?
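The behavior described above follows from a generator-factory pattern: the which_model_netG string picks which generator class is built, so the attention-aware forward pass only exists in the matching variant. The sketch below is hypothetical; the option strings ('unet_256', 'sid_unet_resize') come from the discussion, but the class bodies are illustrative stand-ins, not the repo's implementation.

```python
import torch
import torch.nn as nn

class PlainUnetStub(nn.Module):
    """Stand-in for 'unet_256': the first conv sees only the 3 RGB channels,
    so the gray attention map is silently ignored."""
    def __init__(self):
        super().__init__()
        self.conv1_1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)

    def forward(self, input, gray=None):
        return self.conv1_1(input)  # gray is never used

class SidUnetResizeStub(nn.Module):
    """Stand-in for 'sid_unet_resize': the first conv takes 4 channels,
    so the gray attention map actually enters the network."""
    def __init__(self):
        super().__init__()
        self.conv1_1 = nn.Conv2d(4, 32, kernel_size=3, padding=1)

    def forward(self, input, gray):
        return self.conv1_1(torch.cat((input, gray), 1))

def define_G(which_model_netG: str) -> nn.Module:
    # Factory: the option string decides which generator variant is built.
    if which_model_netG == 'unet_256':
        return PlainUnetStub()
    if which_model_netG == 'sid_unet_resize':
        return SidUnetResizeStub()
    raise ValueError(f'unknown generator: {which_model_netG}')
```

With 'unet_256' the gray tensor is computed but discarded, which matches the symptom reported in this thread.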