orpatashnik / StyleCLIP

Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral)


Latent Mapper never trains

RahulBhalley opened this issue · comments

Hi, thanks for the StyleCLIP work.

But when I train the latent mapper using the instructions under the Editing via Latent Mapper section, my latent mapper just never trains.

I get a best_model.pt of 145.34 MB, whereas the latent mapper alone (including all 3 spatial control levels) is just 12.6 MB. So I extract the mapper parameters and load them into a new latent_mappers.LevelsMapper instance. Then I run inference, traversing the latent space in the direction predicted by the new, well-trained LevelsMapper, something like this: new_latent = latent_code_init + mapper(latent_code_init) * I. But no manipulations happen in the image.
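
For reference, here is a minimal sketch of the extraction step I describe above. It assumes the checkpoint layout used by the repo's StyleCLIPMapper module (a "state_dict" entry whose mapper weights carry a `mapper.` prefix next to the decoder weights, plus a saved "opts" dict); `alpha` is a stand-in for my strength factor `I`, and `latent_code_init` stands in for a real inverted W+ code:

```python
import torch
from argparse import Namespace

from mapper.latent_mappers import LevelsMapper

# Load the full training checkpoint (~145 MB: it stores the frozen
# StyleGAN decoder alongside the ~12.6 MB mapper).
ckpt = torch.load("best_model.pt", map_location="cpu")
opts = Namespace(**ckpt["opts"])

# Keep only the mapper's parameters. The "mapper." prefix is an
# assumption: the checkpoint is taken to hold the state_dict of the
# whole StyleCLIPMapper module, whose children are "mapper"/"decoder".
mapper_state = {
    k[len("mapper."):]: v
    for k, v in ckpt["state_dict"].items()
    if k.startswith("mapper.")
}

mapper = LevelsMapper(opts)
mapper.load_state_dict(mapper_state)
mapper.eval()

# Manipulation step from above: latent_code_init would normally be the
# (1, 18, 512) W+ code of the inverted input image; a random tensor is
# used here only so the snippet runs standalone.
latent_code_init = torch.randn(1, 18, 512)
alpha = 0.1  # hypothetical strength factor (the "I" above)
with torch.no_grad():
    new_latent = latent_code_init + alpha * mapper(latent_code_init)
```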

Here are samples (initial image and manipulated image).

1) For description "bob-cut hairstyle"

[initial and manipulated image attachments]

2) For description "smiling face"

[initial and manipulated image attachments]

I just found out that it does work, but only if I load the decoder parameters too. But shouldn't the decoder be locked during training? I mean, aren't we supposed to change only the latent mapper to manipulate different aspects of an image?
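
In case it helps anyone hitting the same mismatch, a hedged sketch of that workaround, continuing the snippet above (`ckpt` is the loaded checkpoint; `generator` is assumed to be the StyleGAN2 generator instance used for rendering, and the `decoder.` prefix is again an assumption about the checkpoint layout):

```python
# Restore the decoder from the same checkpoint the mapper was trained
# against, so the rendered images match what the mapper was fitted to.
decoder_state = {
    k[len("decoder."):]: v
    for k, v in ckpt["state_dict"].items()
    if k.startswith("decoder.")
}
generator.load_state_dict(decoder_state)
generator.eval()
```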

Hi! Could you provide the commands you used for training, please? I'm trying the same thing, but the generator doesn't seem to work correctly.