NVlabs / few-shot-vid2vid

PyTorch implementation for few-shot photorealistic video-to-video translation.

When will the code be released?

MengXinChengXuYuan opened this issue · comments

Great work!!!!
Any timeline to release the code?

Same here, I'm looking forward to this code.

The code is ready for release, but we're still waiting for lawyers to resolve some legal issues. Once it's approved we can release it.

Are you planning on releasing a pre-trained model?

Immensely looking forward to it!

I'm looking forward to it!

I'm looking forward to it too!

A question first: what is the minimum GPU memory this work needs?

Hi, I'm currently trying to re-implement a simplified version of your work for generating novel views of static faces, as described in the paper Few-Shot Adversarial Learning of Realistic Neural Talking Head Models.

Instead of using AdaIN as in the above paper, I tried the original SPADE first (I planned to update it as described in your paper, with the weights of mlp_shared/mlp_gamma/mlp_beta made learnable, once the original SPADE gave good results). However, the results on unseen data are very poor.

My question is: did you use the fully original official implementation of SPADE, or did you modify it?
In particular, what type of normalization did you use in the generator: SyncBatchNorm (the default), BatchNorm, or InstanceNorm?
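
For concreteness, here is a minimal sketch of the SPADE block I mean, following the layout of the public NVlabs/SPADE code with the mlp_shared/mlp_gamma/mlp_beta naming; the `norm_type` switch is an assumption I added just to illustrate swapping the parameter-free normalization being asked about, it is not claimed to be the authors' setup:

```python
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    def __init__(self, norm_nc, label_nc, norm_type='batch', nhidden=128):
        super().__init__()
        # Parameter-free normalization (no learned affine); 'norm_type' is a
        # hypothetical switch added for this sketch. The official SPADE default
        # is a synchronized batchnorm, which plain BatchNorm2d approximates on
        # a single GPU.
        if norm_type == 'batch':
            self.param_free_norm = nn.BatchNorm2d(norm_nc, affine=False)
        elif norm_type == 'instance':
            self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False)
        else:
            raise ValueError(f'unknown norm_type: {norm_type}')

        # Shared trunk over the conditioning map, then per-location gamma/beta.
        self.mlp_shared = nn.Sequential(
            nn.Conv2d(label_nc, nhidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1)
        self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        # Normalize the activations, then modulate them spatially with
        # gamma/beta predicted from the (resized) conditioning map.
        normalized = self.param_free_norm(x)
        segmap = F.interpolate(segmap, size=x.shape[2:], mode='nearest')
        actv = self.mlp_shared(segmap)
        gamma = self.mlp_gamma(actv)
        beta = self.mlp_beta(actv)
        return normalized * (1 + gamma) + beta
```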

Do you have any estimated timeline for releasing the code?

The code is now released.