imlixinyang / HiSD

Official pytorch implementation of paper "Image-to-image Translation via Hierarchical Style Disentanglement" (CVPR 2021 Oral).

Quick Start - Test issue

HeX-2000 opened this issue

I have completed the first few steps of Quick Start (download the dataset and preprocess the dataset). I haven't run train.py yet because of a GPU problem, so I started trying test.py.
As you suggest, $your_input_path can be either an image file or a folder of images, so I tried
python core/test.py --config configs/celeba-hq.yaml --checkpoint configs/checkpoint_256_celeba-hq.pt--input_path CelebAMask-HQ/CelebAMask-HQ/CelebA-HQ-img --output_path result
test.py: error: unrecognized arguments: CelebAMask-HQ/CelebAMask-HQ/CelebA-HQ-img
or python core/test.py --config configs/celeba-hq.yaml --checkpoint configs/checkpoint_256_celeba-hq.pt--input_path CelebAMask-HQ/CelebAMask-HQ/CelebA-HQ-img/0.jpg --output_path result
test.py: error: unrecognized arguments: CelebAMask-HQ/CelebAMask-HQ/CelebA-HQ-img/0.jpg
I don't know if this is because I haven't yet modified the 'steps' dict in the first few lines of 'core/test.py'. If that is the reason, could you tell me how to modify the 'steps' dict? As a junior who has only studied deep learning for one or two months, this is really a bit difficult for me. Thanks a lot.

Try this:
python core/test.py --config configs/celeba-hq_256.yaml --checkpoint configs/checkpoint_256_celeba-hq.pt --input_path CelebAMask-HQ/CelebAMask-HQ/CelebA-HQ-img/0.jpg --output_path result

The released code is for 256 resolution, and it looks like a space is missing between your --checkpoint and --input_path arguments.
You also need to remove all the .cuda() calls in the script if you haven't installed the GPU version of PyTorch.
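For reference, the CPU-only change boils down to loading the checkpoint with map_location, roughly like this (a minimal sketch, not the exact test.py code):

import torch

# map_location moves every tensor in the checkpoint onto the CPU, so a
# checkpoint saved on a GPU machine can be loaded without CUDA available
state_dict = torch.load('configs/checkpoint_256_celeba-hq.pt',
                        map_location=torch.device('cpu'))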

I didn't expect it to be such a simple reason. I then removed all the .cuda() calls, set map_location=torch.device('cpu') in torch.load, and ran
python core/test.py --config configs/celeba-hq.yaml --checkpoint configs/checkpoint_256_celeba-hq.pt --input_path CelebAMask-HQ/CelebAMask-HQ/CelebA-HQ-img/0.jpg --output_path result
Then the error:
Traceback (most recent call last):
  File "core/test.py", line 42, in <module>
    trainer.models.gen.load_state_dict(state_dict['gen_test'])
  File "D:\Anaconda3\envs\HiSD\lib\site-packages\torch\nn\modules\module.py", line 1224, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Gen:
    Missing key(s) in state_dict: "extractors.model.7.weight", "extractors.model.7.bias".
    Unexpected key(s) in state_dict: "extractors.model.8.weight", "extractors.model.8.bias", "extractors.model.6.conv1.weight", "extractors.model.6.conv1.bias", "extractors.model.6.conv2.weight", "extractors.model.6.conv2.bias", "extractors.model.6.sc.weight".
It also shows size-mismatch messages from extractors.model.0 through extractors.model.5, for example:
size mismatch for extractors.model.5.conv1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for extractors.model.5.conv2.weight: copying a param with shape torch.Size([512, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([2048, 1024, 3, 3]).
size mismatch for extractors.model.5.conv2.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for extractors.model.5.sc.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([2048, 1024, 1, 1])
thanks again.

This is because you used the 128-resolution config, but the checkpoint is for 256 resolution.
The config arg is expected to be --config configs/celeba-hq_256.yaml.
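If you want a quick sanity check (a sketch, not part of the repo), you can print a few tensor shapes from the checkpoint before load_state_dict is called and compare them with the model built from your config:

import torch

# 'gen_test' is the key test.py loads into the generator (see the traceback above)
ckpt = torch.load('configs/checkpoint_256_celeba-hq.pt', map_location='cpu')
for name, tensor in list(ckpt['gen_test'].items())[:5]:
    print(name, tuple(tensor.shape))

A 128-resolution config builds a model whose corresponding parameters have different shapes, which is exactly what the size-mismatch messages report.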

Thank you so much. After changing this, I can see the picture in the output folder. But I found a problem: no matter how I modify the 'steps' dict, only its last entry takes effect.
{'type': 'latent-guided', 'tag': 0, 'attribute': 0, 'seed': None},
{'type': 'latent-guided', 'tag': 1, 'attribute': 0, 'seed': None},
{'type': 'latent-guided', 'tag': 2, 'attribute': 0, 'seed': None},
I tried changing the order of the three entries, and in the output image only the tag indicated in the last entry is changed. This means that
python core/test.py --config configs/celeba-hq_256.yaml --checkpoint configs/checkpoint_256_celeba-hq.pt --input_path CelebAMask-HQ/CelebAMask-HQ/CelebA-HQ-img/0.jpg --output_path result
cannot complete a multi-tag task in a single run. Is this the expected behaviour of the code? If it is, how can I use test.py to complete a multi-tag task?

Please pull the latest version and try again.
It was caused by a mistake in the code: each translation was always applied to the original feature, when it should be applied to the already-translated one.
Let me know if there is still a problem.
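In other words, the loop over the steps should feed each step's output into the next step, roughly like this (a schematic sketch with hypothetical names, not the actual test.py code):

def apply_steps(feature, steps, translate):
    # the bug: translate() was always given the original encoded feature;
    # the fix: pass the result of the previous step into the next one
    for step in steps:
        feature = translate(feature, step)
    return feature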

The latest version completes the multi-tag task well, and I can see that all the tags have been translated. Thank you very much for giving me such detailed guidance. It's really my luck, as a beginner, to meet such a good teacher and to read your detailed and easy-to-understand code.

It's my pleasure. I wish you a good journey in your research.