AlamiMejjati / Unsupervised-Attention-guided-Image-to-Image-Translation

Unsupervised Attention-Guided Image to Image Translation

How can I make the attention focus on the background instead of the object?

deep0learning opened this issue · comments

Hi,
Thank you for this task. If I want to make attention in the background that I need to change rather than the object. For example, in domain A and B, we have horse images, when translating from domain A - B, I want to keep the same horse in domain B but the background will be changed as domain A. How I can do that? Thank you in advance.
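One way to sketch the idea: the paper composites the final image as attention times the generator output plus (1 − attention) times the input, so swapping the two mask roles would keep the attended object and translate everything else. This is a minimal numpy illustration, not the repository's code; the arrays `x`, `gx`, and `a` are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical stand-ins: a source image x, a generator output G(x),
# and an attention map a in [0, 1] where 1 marks the foreground (horse).
x = np.full((2, 2, 3), 0.8)            # source image
gx = np.full((2, 2, 3), 0.2)           # translated image from the generator
a = np.array([[1.0, 0.0],
              [0.0, 1.0]])[..., None]  # attention: 1 on the horse

# Paper-style compositing translates the attended (foreground) region:
#   y = a * G(x) + (1 - a) * x
y_foreground = a * gx + (1 - a) * x

# To translate the background instead, swap the mask roles:
#   y = (1 - a) * G(x) + a * x   -> keeps the horse, changes the scene
y_background = (1 - a) * gx + a * x
```

Whether the adversarial training still converges with the roles swapped is a separate question, since the discriminator would now judge images whose foreground never changes.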

I have the same question. In other words, how can the Attention Network output a mask (attention map) that keeps tracking the foreground object in an unsupervised setup? According to the paper, the architectures of the Generators and the Attention Networks are almost the same except for the final activation function: when the final activation is a sigmoid with a single output channel, the network's output is the attention map. I don't understand how that works.
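To make the architectural difference concrete, here is a minimal numpy sketch of just the two output heads, assuming a shared feature map; the shapes, weights, and the linear "head" are illustrative stand-ins, not the paper's actual convolutional layers.

```python
import numpy as np

np.random.seed(0)

def sigmoid(z):
    # Squashes logits to (0, 1), suitable for a soft mask.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical final feature map of shape (H, W, C) from a backbone.
features = np.random.randn(4, 4, 8)

# Generator-style head: 3 output channels through tanh -> an RGB image
# with values in (-1, 1).
w_gen = np.random.randn(8, 3) * 0.1
image_out = np.tanh(features @ w_gen)

# Attention-style head: 1 output channel through sigmoid -> a soft
# per-pixel mask with values in (0, 1).
w_att = np.random.randn(8, 1) * 0.1
attention_map = sigmoid(features @ w_att)
```

So the sigmoid/1-channel choice only constrains the *range and shape* of the output; nothing in the architecture itself forces the mask onto the foreground.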

Moreover, Figure 7 in the paper shows that the Attention Network already focuses on the foreground object early in training, which is remarkable. At that stage the only losses are the adversarial loss and the cycle-consistency loss; no label information guides the Attention Network toward the foreground object.
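For reference, the early-training objective being discussed combines only those two terms. A scalar sketch, with `lambda_cyc = 10.0` assumed as the usual CycleGAN-style weighting (not taken from this paper):

```python
def total_loss(adv_ab, adv_ba, cyc_a, cyc_b, lambda_cyc=10.0):
    # Adversarial terms push the composited translations toward the
    # target domain; cycle terms force x -> F(G(x)) to stay close to x.
    # The attention map enters only through the composited images, so
    # no mask labels appear anywhere in this objective.
    return adv_ab + adv_ba + lambda_cyc * (cyc_a + cyc_b)
```

The intuition usually given is that translating the foreground is the cheapest way to fool the discriminator while keeping the cycle error low, so the mask drifts toward the object without supervision.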

I am looking forward to discussing this with you and the author. @deep0learning @AlamiMejjati