TypeError: '>=' not supported between instances of 'NoneType' and 'float'
Slyke opened this issue
$ ./build.sh run --model 'timbrooks/instruct-pix2pix' --image image.png image.png 'Change the dog to be a cat'
loaded image from image.png: 2023-10-01T12:22:36.120824
load pipeline start: 2023-10-01T12:22:36.342914
Loading pipeline components...: 86%|████████▌ | 6/7 [00:00<00:00, 6.20it/s]`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden.
Loading pipeline components...: 100%|██████████| 7/7 [00:01<00:00, 5.22it/s]
loaded models after: 2023-10-01T12:22:42.096667
Traceback (most recent call last):
File "/usr/local/bin/docker-entrypoint.py", line 297, in <module>
main()
File "/usr/local/bin/docker-entrypoint.py", line 293, in main
stable_diffusion_inference(pipeline)
File "/usr/local/bin/docker-entrypoint.py", line 143, in stable_diffusion_inference
result = p.pipeline(**remove_unused_args(p))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py", line 265, in __call__
do_classifier_free_guidance = guidance_scale > 1.0 and image_guidance_scale >= 1.0
^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '>=' not supported between instances of 'NoneType' and 'float'
You need to pass in the --scale parameter for pix2pix: https://github.com/fboulnois/stable-diffusion-docker#options
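For context, the crash comes from the pipeline comparing an unset image_guidance_scale against a float. A minimal sketch of the failing line and a caller-side guard, assuming the entrypoint forwards None when the flag is missing (the 1.5 fallback is the pipeline's documented default, not something the entrypoint does today):

```python
# Minimal reproduction of the failing line in
# pipeline_stable_diffusion_instruct_pix2pix.py: image_guidance_scale
# arrives as None, and None cannot be compared to a float.
guidance_scale = 7.5
image_guidance_scale = None

try:
    do_cfg = guidance_scale > 1.0 and image_guidance_scale >= 1.0
except TypeError as e:
    print(e)  # '>=' not supported between instances of 'NoneType' and 'float'

# Defensive fix on the caller's side: fall back to the pipeline's
# documented default of 1.5 when no value was supplied.
image_guidance_scale = 1.5 if image_guidance_scale is None else image_guidance_scale
do_cfg = guidance_scale > 1.0 and image_guidance_scale >= 1.0
print(do_cfg)  # True
```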
Same result:
$ ./build.sh run --model 'timbrooks/instruct-pix2pix' --image image.png 'Change the dog to be a cat' --scale 7.5
loaded image from image.png: 2023-10-01T20:34:57.620147
load pipeline start: 2023-10-01T20:34:57.798929
Loading pipeline components...: 57%|█████▋ | 4/7 [00:00<00:00, 6.52it/s]`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden.
Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 7.96it/s]
loaded models after: 2023-10-01T20:35:06.574883
Traceback (most recent call last):
File "/usr/local/bin/docker-entrypoint.py", line 297, in <module>
main()
File "/usr/local/bin/docker-entrypoint.py", line 293, in main
stable_diffusion_inference(pipeline)
File "/usr/local/bin/docker-entrypoint.py", line 143, in stable_diffusion_inference
result = p.pipeline(**remove_unused_args(p))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py", line 265, in __call__
do_classifier_free_guidance = guidance_scale > 1.0 and image_guidance_scale >= 1.0
^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '>=' not supported between instances of 'NoneType' and 'float'
I tried a --scale value less than 1 and it seems to get past that step now.
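That matches Python's short-circuiting `and`: with guidance_scale at or below 1.0 the left operand is already False, so the comparison against the unset image_guidance_scale is never evaluated. The side effect is that classifier-free guidance is disabled, so the prompt will barely steer the edit. A quick illustration:

```python
guidance_scale = 0.9          # anything <= 1.0
image_guidance_scale = None   # still unset

# The left operand is False, so Python never evaluates the right-hand
# comparison and the TypeError is avoided.
do_cfg = guidance_scale > 1.0 and image_guidance_scale >= 1.0
print(do_cfg)  # False -- but classifier-free guidance is now off
```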
I'm getting:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 5.72 GiB (GPU 0; 12.00 GiB total capacity; 10.99 GiB already allocated; 0 bytes free; 11.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

both with and without --attention-slicing --half.
But I think that this is a different issue. I'll troubleshoot first before asking.
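For the OOM, the error message itself points at one knob: the CUDA caching allocator's max_split_size_mb, which must be set in the environment before PyTorch makes its first CUDA allocation. A sketch (the 512 MiB value is an arbitrary starting point, not a recommendation):

```python
import os

# The caching allocator reads this variable on the first CUDA allocation,
# so it must be set before `import torch` runs any CUDA code -- or exported
# outside Python entirely, e.g.:
#   PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 ./build.sh run ...
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```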