yuval-alaluf / stylegan3-editing

Official Implementation of "Third Time's the Charm? Image and Video Editing with StyleGAN3" (AIM ECCVW 2022) https://arxiv.org/abs/2201.13433

Home Page: https://yuval-alaluf.github.io/stylegan3-editing/


TypeError: can't convert cuda:0 device type tensor to numpy

onefish51 opened this issue · comments

commented

When I run:

python inversion/video/inference_on_video.py \
--video_path dataset/time_fly.mp4 \
--checkpoint_path pretrained_models/restyle_e4e_ffhq.pt \
--output_path dataset/video_inference

an error occurred:

Traceback (most recent call last):
  File "inversion/video/inference_on_video.py", line 149, in <module>
    run_inference_on_video()
  File "/opt/conda/lib/python3.8/site-packages/pyrallis/argparsing.py", line 160, in wrapper_inner
    response = fn(cfg, *args, **kwargs)
  File "inversion/video/inference_on_video.py", line 69, in run_inference_on_video
    landmarks_transforms = np.array(list(results["landmarks_transforms"]))
  File "/opt/conda/lib/python3.8/site-packages/torch/tensor.py", line 621, in __array__
    return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

I ran the code in a PyTorch 1.8.0 + CUDA 11.1 Docker image that I built myself, with:

numpy in /opt/conda/lib/python3.8/site-packages (1.22.2)
Python 3.8.8
GCC 7.3.0
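
For context, here is a minimal sketch of the failure mode (my own reproduction, not code from the repo; it assumes a CUDA device is available):

import numpy as np
import torch

# Stand-in for results["landmarks_transforms"]: a list of per-frame CUDA tensors.
transforms = [torch.eye(3, device="cuda") for _ in range(4)]

try:
    np.array(list(transforms))  # numpy calls __array__ on each CUDA tensor, which raises
except TypeError as err:
    print(err)  # can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() ...

stacked = np.array([t.cpu().numpy() for t in transforms])  # move each tensor to host memory first
print(stacked.shape)  # (4, 3, 3)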
commented

The error happens here:

landmarks_transforms = np.array(list(results["landmarks_transforms"]))

I think you could try replacing the line with:

landmarks_transforms = np.array(list(results["landmarks_transforms"].cpu()))
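
Note that the traceback suggests results["landmarks_transforms"] is a list of per-frame CUDA tensors rather than a single tensor, so calling .cpu() on the whole collection may not apply. A per-element conversion is probably the safer guess (a sketch only, not tested against the repo):

landmarks_transforms = np.array([
    t.detach().cpu().numpy() if torch.is_tensor(t) else t
    for t in results["landmarks_transforms"]
])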
commented

I fixed it in the run_inference function with

results["landmarks_transforms"].append(image_landmarks_transform.cpu())

but in my opinion this is a version issue.

Hi @onefish51,
I wasn't able to reproduce your error with the environment from the repo, so I agree that it's probably a version issue.
I tried adding your suggested fix, but when I do so the code fails at a later stage of the inference pipeline:

landmarks_transforms = np.array(list(results["landmarks_transforms"]))

Since you are working in a different environment and were able to resolve the issue, I will close this issue for now. If the issue also occurs in other environments, I will try to find a more robust solution than the current one.