Maclory / Deep-Iterative-Collaboration

PyTorch implementation of Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation (CVPR 2020)


tensorflow version

Yakuho opened this issue

Thanks for your excellent work. Do you have a TensorFlow version of this code?

Sorry, we only have the PyTorch version of the code.

OK, thanks. If I want to use the LFW and Helen datasets to train the model, what should I do?

You can follow the instructions in README.md, issue #10, and issue #21.

thanks!!!

Do you have any communication groups, such as QQ?

It's my pleasure. We don't have any communication groups; we can communicate here or by email :-)

What kind of images should I use when extracting the landmarks (the face with background, or only the face region)? I found that when I used images containing only the face, OpenFace often failed to detect the bounding box. :-)

In my opinion, OpenFace is a powerful tool, and you can process images that include the background.
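
For reference, a minimal sketch of that kind of preprocessing: padding a tight face crop with some surrounding background before handing it to a landmark detector such as OpenFace. The helper and the margin value are illustrative only, not part of this repository.

```python
# A sketch only: expand a tight face box with surrounding background,
# since detectors often fail on images that contain nothing but the face.
from PIL import Image

def crop_face_with_margin(image_path, box, margin=0.5):
    """box = (left, top, right, bottom) from any face detector;
    margin is the fraction of the box size kept as background on each side."""
    img = Image.open(image_path)
    left, top, right, bottom = box
    w, h = right - left, bottom - top
    pad_w, pad_h = int(w * margin), int(h * margin)
    # Clamp the padded box to the image bounds.
    crop_box = (max(0, left - pad_w), max(0, top - pad_h),
                min(img.width, right + pad_w), min(img.height, bottom + pad_h))
    return img.crop(crop_box)

# Example: keep 50% of the face size as background on each side,
# then feed the saved crop to OpenFace for landmark detection.
# crop_face_with_margin("face.jpg", (60, 40, 180, 200)).save("face_with_bg.jpg")
```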

Hello, I have successfully annotated the training set, but when I try to train, it throws an out-of-memory error. I'm training the model on Colab (15 GB of GPU memory), with batch_size=8 and the other settings at their defaults. What batch_size did you use during training? I noticed the paper says you used a 2080 Ti. Thanks! :-)

The 2080 Ti has 11 GB of memory; the default settings should run normally.

Did you train with two GPUs? I noticed the default device configuration is [0, 1].

Yes. You can try lowering the batch size.

I've already lowered it to 4, and it now runs successfully. I just have one question: the Colab GPU is a Tesla T4 with 15 GB of memory, yet in single-GPU mode it cannot run your default configuration. QAQ (crying)
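
For reference, a minimal sketch of this kind of change, assuming the training options live in a JSON file with gpu_ids and batch_size fields; the path and field names are assumptions, not verified against this repo's option schema.

```python
# A sketch only: edit the training option file to use a single GPU and a
# smaller batch size. Check the actual field names in the repo's option files.
import json

opt_path = "options/train/train_DIC_CelebA.json"  # hypothetical path
with open(opt_path) as f:
    opt = json.load(f)

opt["gpu_ids"] = [0]     # single-GPU mode (the default configuration is [0, 1])
opt["batch_size"] = 4    # reduced from the default of 8

with open(opt_path, "w") as f:
    json.dump(opt, f, indent=2)
```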

OK, see how the results turn out.

Thank you for taking time out of your busy schedule to reply 👍

You're welcome :-)

Good morning, sorry to bother you again. Since I'm training your model on Colab, which disconnects every 12 hours, where do I need to configure things so that I can resume my training?

Just look for the resume-related options in the configuration.
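
A minimal sketch of what those resume-related entries might look like; the field names and checkpoint path below are assumptions borrowed from similar super-resolution codebases, not confirmed against this repository's option files.

```python
# Illustrative only -- look for the actual resume options in the repo's option files.
resume_opts = {
    "solver": {
        "pretrain": "resume",                                      # assumed flag name
        "pretrained_path": "experiments/DIC/epochs/last_ckp.pth",  # hypothetical checkpoint path
    }
}
# Merge these into the training option file before relaunching training on Colab.
```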

Hi, when I used the CelebA pretrain_HG for transfer-learning training of DIC_Helen, I found that at around 21K steps the align loss suddenly dropped sharply, but in TensorBoard the landmark points became chaotic and no longer followed the facial contours (before that, they did follow the contours). What could be the cause? Could you give me some advice? Thank you.