YuxinWenRick / tree-ring-watermark


debug error

jinganglang567 opened this issue · comments

Pipelines loaded with torch_dtype=torch.float16 cannot run with cpu device. It is not recommended to move them to cpu as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for float16 operations on this device in PyTorch. Please, remove the torch_dtype=torch.float16 argument, or use another device for inference.

I got this problem. What do I need to do?

Hi, can you check this line to see if you are running the job on a GPU? I think it indicates that you are running the code without a GPU.
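A quick way to confirm this is to check what PyTorch reports as the available device before loading the pipeline. This is just a generic sketch, not code from the repo:

```python
import torch

# If this prints "cpu", the pipeline is being loaded with
# torch_dtype=torch.float16 on a machine without a usable GPU,
# which produces exactly the float16-on-cpu warnings above.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(device)
```
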

E:\miniconda3\envs\tree-ring-watermark-main\python.exe D:\tree-ring-watermark-main\tree-ring-watermark-main\run_tree_ring_watermark.py
E:\miniconda3\envs\tree-ring-watermark-main\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
warnings.warn(
Cannot initialize model with low cpu memory usage because accelerate was not found in the environment. Defaulting to low_cpu_mem_usage=False. It is strongly recommended to install accelerate for faster and less memory-intense model loading. You can do so with:

pip install accelerate

E:\miniconda3\envs\tree-ring-watermark-main\lib\site-packages\diffusers\modeling_utils.py:96: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
return torch.load(checkpoint_file, map_location="cpu")
E:\miniconda3\envs\tree-ring-watermark-main\lib\site-packages\transformers\modeling_utils.py:399: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
return torch.load(checkpoint_file, map_location="cpu")
Pipelines loaded with torch_dtype=torch.float16 cannot run with cpu device. It is not recommended to move them to cpu as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for float16 operations on this device in PyTorch. Please, remove the torch_dtype=torch.float16 argument, or use another device for inference.
Using the latest cached version of the dataset since Gustavosta/Stable-Diffusion-Prompts couldn't be found on the Hugging Face Hub
Found the latest cached dataset configuration 'default' at C:\Users\宋雨轩.cache\huggingface\datasets\Gustavosta___stable-diffusion-prompts\default\0.0.0\d816d4a05cb89bde39dd99284c459801e1e7e69a (last modified on Wed Aug 21 16:50:04 2024).
Traceback (most recent call last):
File "D:\tree-ring-watermark-main\tree-ring-watermark-main\run_tree_ring_watermark.py", line 217, in
main(args)
File "D:\tree-ring-watermark-main\tree-ring-watermark-main\run_tree_ring_watermark.py", line 48, in main
gt_patch = get_watermarking_pattern(pipe, args, device)
File "D:\tree-ring-watermark-main\tree-ring-watermark-main\optim_utils.py", line 175, in get_watermarking_pattern
gt_patch = torch.fft.fftshift(torch.fft.fft2(gt_init), dim=(-1, -2))
RuntimeError: Unsupported dtype Half

Hello, I encountered the above error while running run_tree_ring_watermark.py. How can I resolve this issue?
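For what it's worth, the `RuntimeError: Unsupported dtype Half` comes from the fact that CPU PyTorch has no half-precision FFT kernels. If a GPU is not available, one workaround (a sketch, not code from this repo; the tensor shape is a hypothetical latent) is to load the pipeline without `torch_dtype=torch.float16` or to upcast the latent to float32 before the FFT:

```python
import torch

# Hypothetical latent tensor, mimicking the shape of a Stable Diffusion latent.
gt_init = torch.randn(1, 4, 64, 64, dtype=torch.float16)

# CPU FFT does not support float16, so upcast before calling fft2:
gt_patch = torch.fft.fftshift(torch.fft.fft2(gt_init.float()), dim=(-1, -2))
print(gt_patch.dtype)  # complex64
```
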
