bryandlee / animegan2-pytorch

PyTorch implementation of AnimeGANv2

Training Anime like ARCANE (Netflix)

enzyme69 opened this issue · comments

Quick question:
Is it possible to train with a new style, like the Netflix animated series "Arcane"? I really love their rendering of faces.

If possible, how hard is it, and does it take a long time on an M1?

It would be cool indeed. I think Face Portrait v2 is close to this.
I will try to train with my own dataset and will let you know if I succeed with Arcane pictures.


@Greg8978 Have you trained the model on your own dataset, and what kind of dataset is it: facial animation? Looking forward to your update.

Hey there!

I collected some images directly from the show.
I tried to train the model, but it took 187 hours on my Ubuntu machine.
I haven't yet taken the time to set up my Windows computer to use the GPU for training.
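A 187-hour run like the one above usually means training fell back to the CPU. A quick generic sanity check (not tied to this repo's training script) before launching a long job:

```python
import torch

# Pick the GPU when PyTorch can see one, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"training on: {device}")

# Moving the model and each batch to the chosen device is all that's needed;
# the rest of the training loop stays unchanged:
# model = model.to(device)
# images = images.to(device)
```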


Would you mind sharing the dataset? I have enough GPU resources for training.

Sure, have a look here

@zhanglonghao1992, if you can share the training output it would make things easier for me ;)


@Greg8978 I failed to train face stylization on the dataset you provided. I guess the training style data should contain a large number of clear faces. For now, I plan to collect more face images from Arcane for training.

Ha ok, thanks for the feedback.

I'm training one, but the results are not as good so far. Will let you know if I get it working.
In the meantime, here are some super-cherry-picked golden samples:

arcane

Any chance that you could release the current checkpoint? The results look amazing already!


Did you do face alignment when training face stylization?


Thanks, your results look good enough. Could you please tell me which training code you used? I have another custom dataset and want to train on it too.


I've also given this style a try. So far I've got results like this on in-the-wild images.
image


That's cool! Would you mind sharing your training datasets and strategies?


Those are awesome! Is that a blend with the same Z or a projection?


Hi, how many pictures did you use to train the StyleGAN model?


Amazing work! When I train a StyleGAN model using anime screenshots, there are always artifacts on the face. I'm troubled by the lack of an appropriate dataset. How many pictures did you use to train your StyleGAN model? I would appreciate it if you could let me know. And, if convenient, could you release the dataset?


Awesome! Is this a StyleGAN result, followed by AnimeGAN training?


The transfer has expired, can you share it again?

It's easy to reproduce: just collect Arcane video(s) (I used YouTube trailers), then
ffmpeg -i Arcanevideo.mp4 -vf fps=0.5 arcane%d.jpg
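Scripted over a folder of clips, the same extraction might look like this (a sketch using Python's subprocess; the fps=0.5 filter keeps one frame every two seconds, and the filenames are placeholders):

```python
import shutil
import subprocess
from pathlib import Path

def build_extract_cmd(video: str, out_pattern: str, fps: float = 0.5) -> list[str]:
    """Build an ffmpeg command that samples `fps` frames per second."""
    return ["ffmpeg", "-i", video, "-vf", f"fps={fps}", out_pattern]

if __name__ == "__main__":
    # Only run the extraction if ffmpeg is actually installed.
    if shutil.which("ffmpeg"):
        for clip in Path(".").glob("*.mp4"):
            subprocess.run(build_extract_cmd(str(clip), f"{clip.stem}_%d.jpg"))
```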

@bilal2vec These are super-cherry-picked samples. The model's really fragile at the moment and has obvious normalization-related artifacts on most images. I'm testing out some other techniques and trying to find a sweet spot between quality and robustness.

@zhanglonghao1992 I didn't, but it should help.

@Sxela It's from a distilled pix2pix model.

@rainsoulsrx @tinapan-pt I used about 500 images, but they contain many duplicates because they were taken from videos with a limited set of characters.

@chenhk-chn Yup.
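The distillation setup mentioned above (a slow teacher producing targets that a small image-to-image student learns to mimic) can be sketched roughly like this. The tiny networks here are placeholders, not the actual StyleGAN or pix2pix architectures:

```python
import torch
import torch.nn as nn

# Placeholder "teacher": stands in for the slow stylization pipeline
# (e.g. a blended StyleGAN); it is frozen during distillation.
teacher = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
for p in teacher.parameters():
    p.requires_grad_(False)

# Placeholder "student": a small feed-forward generator (pix2pix-style)
# that learns to reproduce the teacher's outputs in a single pass.
student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

opt = torch.optim.Adam(student.parameters(), lr=2e-4)
x = torch.randn(4, 3, 64, 64)   # a batch of face crops (random here)

target = teacher(x)              # teacher output becomes the training target
loss = nn.functional.l1_loss(student(x), target)
opt.zero_grad()
loss.backward()
opt.step()
```

A real setup would add adversarial and perceptual losses on top of the L1 term, but teacher-output-as-label is the core idea.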


@bryandlee Do you use paired data for training?


So besides these super-cherry-picked samples, would you please share some typical examples?

@zhanglonghao1992 yup

@rainsoulsrx https://fragrant-chauffeur-53f.notion.site/Failures-d701a060e52046188b45823f56589093
It's a pix2pixHD network (not the animegan architecture) with instance norm, and I suspect that the black blobs are something similar to the "droplet artifact" in StyleGAN1.


Still not bad!
However, in my experiment I find it's hard to retain the 'style' when I train StyleGAN for fewer epochs; on the other hand, if I train StyleGAN for more epochs, the 'style' becomes stronger, but I get more artifacts, which produce unpleasant data pairs, just like below. So how do you strike this balance?

image


You can try batch norm with spectral norm; it seems to work okay for me.
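The batch-norm-plus-spectral-norm combination suggested here can be sketched as a drop-in conv block (my own minimal example, not code from either repo):

```python
import torch
import torch.nn as nn

def sn_conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Conv block with spectral norm on the conv weight, and BatchNorm2d
    in place of the InstanceNorm2d suspected of causing the black blobs."""
    return nn.Sequential(
        nn.utils.spectral_norm(nn.Conv2d(in_ch, out_ch, 3, padding=1)),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

block = sn_conv_block(3, 32)
y = block(torch.randn(2, 3, 64, 64))  # spatial size preserved by padding=1
```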

@Sxela Thanks for the suggestions. I've actually tried a number of architectures, including BN/SN, but didn't really get the quality I wanted. Also, I just found out that you've trained a nice model. If anyone is interested in the Arcane style, check out Sxela/ArcaneGAN.


Thank you for the mention, it means a lot to me! Can't wait to see your model from DeepStudio; it looks much better in my opinion, and its temporal consistency is just outstanding.

@Sxela No problem! Great work man


You do not have access to Bryan's Notion. Please contact an admin to add you as a member.