Requirement divergence and Issue to run cuda in multiprocessing
SandUhrGucker opened this issue
System information
- Ubuntu 20.04
- Python version: 3.9.4
- pip 20.3.4
- NVIDIA driver 460.80
- CUDA version 11.2
Hi,
I'm having issues installing and running rembg-greenscreen as shown in the YouTube video.
Your requirements file pins numpy 1.19.4, but greenscreen requires 1.19.5.
I tried both versions, and either way I run into the following error at runtime:
bubu@desktop:~/share$ greenscreen -pg "WebcamFanny2015.mp4"
/home/munsch/share/WebcamFanny2015.mp4
FRAME RATE DETECTED: 25/1 (if this looks wrong, override the frame rate)
FRAME RATE: 25 TOTAL FRAMES: 428
WORKER FRAMERIPPER ONLINE
WORKER 0 ONLINE
Process Process-3:
Traceback (most recent call last):
File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/munsch/.local/lib/python3.9/site-packages/rembg/multiprocessing.py", line 24, in worker
net = Net(model_name)
File "/home/munsch/.local/lib/python3.9/site-packages/rembg/bg.py", line 74, in __init__
net.load_state_dict(torch.load(path, map_location=torch.device(DEVICE)))
File "/home/munsch/.local/lib/python3.9/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/munsch/.local/lib/python3.9/site-packages/torch/serialization.py", line 772, in _legacy_load
result = unpickler.load()
File "/home/munsch/.local/lib/python3.9/site-packages/torch/serialization.py", line 728, in persistent_load
deserialized_objects[root_key] = restore_location(obj, location)
File "/home/munsch/.local/lib/python3.9/site-packages/torch/serialization.py", line 812, in restore_location
return default_restore_location(storage, str(map_location))
File "/home/munsch/.local/lib/python3.9/site-packages/torch/serialization.py", line 175, in default_restore_location
result = fn(storage, location)
File "/home/munsch/.local/lib/python3.9/site-packages/torch/serialization.py", line 154, in _cuda_deserialize
with torch.cuda.device(device):
File "/home/munsch/.local/lib/python3.9/site-packages/torch/cuda/__init__.py", line 223, in __enter__
self.prev_idx = torch._C._cuda_getDevice()
File "/home/munsch/.local/lib/python3.9/site-packages/torch/cuda/__init__.py", line 160, in _lazy_init
raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
Do you have any tips or a solution?
Regards,
René
Add this line of code: multiprocessing.set_start_method('spawn'), as follows:
def parallel_greenscreen(file_path,
                         worker_nodes,
                         gpu_batchsize,
                         model_name,
                         frame_limit=-1,
                         prefetched_batches=4,
                         framerate=-1):
    multiprocessing.set_start_method('spawn')
    manager = multiprocessing.Manager()
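One caveat worth noting: multiprocessing.set_start_method() may only be called once per process and raises RuntimeError on a second call, so patching it into a library function can break callers that have already fixed a start method. A more defensive sketch (ensure_spawn is a hypothetical helper name, not part of rembg):

```python
import multiprocessing

def ensure_spawn():
    # Hypothetical helper: select 'spawn' only if no start method has been
    # fixed yet. get_start_method(allow_none=True) returns None in that case
    # without implicitly choosing a default.
    if multiprocessing.get_start_method(allow_none=True) is None:
        multiprocessing.set_start_method("spawn")
    return multiprocessing.get_start_method()
```

Calling ensure_spawn() at the top of parallel_greenscreen() would have the same effect as the patch above on a fresh process, while staying a no-op if the application already picked a start method.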