orpatashnik / StyleCLIP

Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral)

Does global_pytorch support stylegan2-ada?

zsMoonshine opened this issue · comments

Hello, I put my own StyleGAN pkl (generated using stylegan2-ada-pytorch) and e4e pt (I converted the stylegan2-ada pkl to pt and trained e4e with it) in place of the FFHQ ones in global_pytorch's Colab demo, and the errors below occur when loading the e4e pt over the pSp framework from the checkpoint.

RuntimeError: Error(s) in loading state_dict for Generator:
	Missing key(s) in state_dict: "style.1.weight", "style.1.bias", 
"style.2.weight", "style.2.bias", "style.3.weight", "style.3.bias", "style.4.weight", "style.4.bias", "style.5.weight", "style.5.bias", "style.6.weight", "style.6.bias", "style.7.weight", "style.7.bias","style.8.weight", "style.8.bias", "input.input", "conv1.conv.weight", "conv1.conv.modulation.weight", "conv1.conv.modulation.bias", "conv1.noise.weight", "conv1.activate.bias", "to_rgb1.bias", "to_rgb1.conv.weight", "to_rgb1.conv.modulation.weight", "to_rgb1.conv.modulation.bias", "convs.0.conv.weight", "convs.0.conv.blur.kernel", "convs.0.conv.modulation.weight", "convs.0.conv.modulation.bias", "convs.0.noise.weight", "convs.0.activate.bias", "convs.1.conv.weight", "convs.1.conv.modulation.weight", "convs.1.conv.modulation.bias", "convs.1.noise.weight", "convs.1.activate.bias", "convs.2.conv.weight", "convs.2.conv.blur.kernel", "convs.2.conv.modulation.weight", "convs.2.conv.modulation.bias", "convs.2.noise.weight", "convs.2.activate.bias", "convs.3.conv.weight", "convs.3.conv.modulation.weight", "convs.3.conv.modulation.bias", "convs.3.noise.weight", "convs.3.activate.bias", "convs.4.conv.weight", "convs.4.conv.blur.kernel", "convs.4.conv.modulation.weight", "convs.4.conv.modulation.bias", "convs.4.noise.weight", "convs.4.activate.bias", "convs.5.conv.weight", "convs.5.conv.modulation.weight", "convs.5.conv.modulation.bias", "convs.5.noise.weight", "convs.5.activate.bias", "convs.6.conv.weight", "convs.6.co...
	Unexpected key(s) in state_dict: "synthesis.b4.const", "synthesis.b4.resample_filter", "synthesis.b4.conv1.weight", "synthesis.b4.conv1.noise_strength", "synthesis.b4.conv1.bias", "synthesis.b4.conv1.resample_filter", "synthesis.b4.conv1.noise_const", "synthesis.b4.conv1.affine.weight", "synthesis.b4.conv1.affine.bias", 
"synthesis.b4.torgb.weight", "synthesis.b4.torgb.bias", "synthesis.b4.torgb.affine.weight", "synthesis.b4.torgb.affine.bias", "synthesis.b8.resample_filter", "synthesis.b8.conv0.weight", "synthesis.b8.conv0.noise_strength", "synthesis.b8.conv0.bias", "synthesis.b8.conv0.resample_filter", "synthesis.b8.conv0.noise_const", "synthesis.b8.conv0.affine.weight", "synthesis.b8.conv0.affine.bias", "synthesis.b8.conv1.weight", "synthesis.b8.conv1.noise_strength", "synthesis.b8.conv1.bias", "synthesis.b8.conv1.resample_filter", "synthesis.b8.conv1.noise_const", "synthesis.b8.conv1.affine.weight", "synthesis.b8.conv1.affine.bias", "synthesis.b8.torgb.weight", "synthesis.b8.torgb.bias", "synthesis.b8.torgb.affine.weight", "synthesis.b8.torgb.affine.bias", "synthesis.b16.resample_filter", "synthesis.b16.conv0.weight", "synthesis.b16.conv0.noise_strength", "synthesis.b16.conv0.bias", "synthesis.b16.conv0.resample_filter", "synthesis.b16.conv0.noise_const", "synthesis.b16.conv0.affine.weight", "synthesis.b16.conv0.affine.bias", "synthesis.b16.conv1.weight", "synthesis.b16.conv1.noise_strength", "synthesis.b16.conv1.bias", "synthesis.b16.conv1.resample_filter", "synthesis...

Yes, the implementation supports stylegan2-ada.

Does the error happen when you load the e4e checkpoint into the StyleCLIP torch code, or when you load the checkpoint into e4e? I do not fully understand the situation.

Thanks for the reply. I think the error happens when loading the e4e checkpoint into e4e. The e4e checkpoint was trained on my own dataset using e4e, not FFHQ.
Here is a screenshot of the error; it shows that the keys of the weight dict do not match.
[screenshot of the error]

This is an error related to e4e or pSp. What is the resolution of your new StyleGAN model? It is not 1024x1024, right?

Please change the opts that are passed into

net = pSp(opts)

by setting

opts.stylegan_size=1024 # change this value to your resolution
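As a minimal sketch of that change (following the checkpoint-loading pattern used in the e4e/StyleCLIP notebooks; the checkpoint path is a placeholder):

```python
# Minimal sketch: rebuild the saved opts and override the generator
# resolution before constructing pSp. The checkpoint path is a placeholder.
from argparse import Namespace

import torch
from models.psp import pSp  # e4e's pSp wrapper, used by global_pytorch

model_path = "e4e_checkpoint.pt"  # placeholder: your own e4e checkpoint
ckpt = torch.load(model_path, map_location="cpu")

opts = ckpt["opts"]                # training options saved in the checkpoint
opts["checkpoint_path"] = model_path
opts["stylegan_size"] = 1024       # change this to your generator's resolution
opts = Namespace(**opts)

net = pSp(opts)
net.eval()
```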

Thanks. I've tried what you suggested, but the problem remains the same.
I think the error happens due to the difference between stylegan2 and stylegan2-ada, because the error message (screenshot below) shows that the keys are different.
[screenshot of the error]

What's more, how does global_pytorch support stylegan2-ada, given that it uses e4e's code? As far as I know, e4e does not support training on stylegan2-ada, so I used a slightly modified version (this) to train with a stylegan2-ada checkpoint, and its author told me that most of the changes are under the models/psp.py file.

I tried replacing the entire original e4e folder with the modified e4e, and the problem still remains, so I think the modification only affects the training process.

I've also tried converting stylegan2-ada weights to stylegan2 weights using a tool in stylegan2-ada-pytorch, but it seems it does not support the 256x256 resolution, which is what I need.

We use the FFHQ 1024x1024 model weights from NVIDIA, which were originally trained in TensorFlow. We take the pre-trained encoder checkpoint from e4e. e4e uses Justin's stylegan2 torch implementation, and the TF weights can easily be converted to Justin's torch weights. We did not train e4e ourselves.

In our StyleCLIP, we just take the w+ output from e4e and feed it into the stylegan2-ada torch model. We do not need to load the stylegan2-ada weights into e4e (which uses Justin's implementation).
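As a rough sketch of that flow (assuming the `net` built in the earlier sketch and a stylegan2-ada-pytorch generator; the pickle path is a placeholder, and unpickling it requires the stylegan2-ada-pytorch repo on sys.path):

```python
# Rough sketch of the pipeline described above: e4e encodes an image into
# w+ codes, which are then fed to a stylegan2-ada-pytorch generator.
import pickle

import torch

# Unpickling needs dnnlib/torch_utils from stylegan2-ada-pytorch on sys.path.
with open("stylegan2-ada-model.pkl", "rb") as f:  # placeholder path
    G = pickle.load(f)["G_ema"].eval()

image = torch.randn(1, 3, 256, 256)  # stand-in for a preprocessed, aligned input
with torch.no_grad():
    # e4e's pSp returns (reconstruction, w+ latents) when return_latents=True
    _, w_plus = net(image, randomize_noise=False, return_latents=True)
    # w_plus has shape [1, num_ws, 512]; synthesize with the ada generator
    out = G.synthesis(w_plus, noise_mode="const")
```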

In your case, I think the best way is to update the mapping function yourself.

Thanks a lot.