zju3dv / Vox-Fusion

Code for "Dense Tracking and Mapping with Voxel-based Neural Implicit Representation", ISMAR 2022

When tracking finishes, FileNotFoundError: [Errno 2] No such file or directory is raised

jenkinLiuu opened this issue

I have changed the data path and tracked all the frames successfully.
But when tracking finished, an error occurred:

********** current num kfs: 40 **********
tracking frame: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 1999/1999 [46:28<00:00,  1.39s/it]
========== stop_mapping set ==========
******* tracking process died *******
Process Process-2:
Traceback (most recent call last):
  File "/root/miniconda3/envs/py310/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/root/miniconda3/envs/py310/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/root/audl-tmp/src/mapping.py", line 86, in spin
    tracked_frame = kf_buffer.get()
  File "/root/miniconda3/envs/py310/lib/python3.10/multiprocessing/queues.py", line 122, in get
    return _ForkingPickler.loads(res)
  File "/root/miniconda3/envs/py310/lib/python3.10/site-packages/torch/multiprocessing/reductions.py", line 355, in rebuild_storage_fd
    fd = df.detach()
  File "/root/miniconda3/envs/py310/lib/python3.10/multiprocessing/resource_sharer.py", line 57, in detach
    with _resource_sharer.get_connection(self._id) as conn:
  File "/root/miniconda3/envs/py310/lib/python3.10/multiprocessing/resource_sharer.py", line 86, in get_connection
    c = Client(address, authkey=process.current_process().authkey)
  File "/root/miniconda3/envs/py310/lib/python3.10/multiprocessing/connection.py", line 502, in Client
    c = SocketClient(address)
  File "/root/miniconda3/envs/py310/lib/python3.10/multiprocessing/connection.py", line 630, in SocketClient
    s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]

Does anyone have the same error?

I am running into the same issue. Have you fixed it?

Not yet. I guess it is a problem with the multiprocessing version, but I don't know where to start.
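For what it's worth, the traceback points at the mapping process still being blocked on kf_buffer.get() when the tracking process exits: rebuilding the shared tensor has to connect back to the (now dead) producer, so the socket lookup fails with FileNotFoundError, and the CudaIPCTypes warnings say the same thing. Below is a minimal, self-contained illustration of that pattern with plain multiprocessing; it is not the repo's code, and the sentinel-based shutdown is just one common way to keep the consumer from blocking on get() forever.

import multiprocessing as mp

def producer(q, n_frames):
    # Stand-in for the tracking process: push frames, then a sentinel.
    for i in range(n_frames):
        q.put(i)
    q.put(None)  # signal that tracking is finished

def consumer(q):
    # Stand-in for the mapping process: stop on the sentinel instead of
    # staying blocked on q.get() after the producer has already exited.
    while True:
        frame = q.get()
        if frame is None:
            break
        # ... per-frame mapping work would go here ...

if __name__ == "__main__":
    q = mp.Queue()
    tracking = mp.Process(target=producer, args=(q, 10))
    mapping = mp.Process(target=consumer, args=(q,))
    tracking.start(); mapping.start()
    tracking.join(); mapping.join()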

I modified line 83 of the src/mapping.py file, changing "while True:" to "while len(self.keyframe_graph) < 40:".
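For context, this change bounds the mapping loop by the keyframe count instead of looping forever, so the mapper stops pulling from kf_buffer once the expected 40 keyframes (the "current num kfs: 40" printed above) have arrived. A rough sketch of how the loop reads after the edit; the surrounding code is paraphrased from the traceback, not copied from the repo, and process_keyframe is a placeholder name:

def spin(self, kf_buffer):
    # Before: "while True:" kept blocking on kf_buffer.get() even after the
    # tracking process had exited, which is where the FileNotFoundError came from.
    # After: stop once the keyframe graph reaches the expected size.
    while len(self.keyframe_graph) < 40:
        tracked_frame = kf_buffer.get()
        self.process_keyframe(tracked_frame)  # placeholder for the per-keyframe mapping step

Note that the hard-coded 40 ties the loop to this particular sequence, so it is more of a workaround than a general fix.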

It really works! Thanks so much!