jy0205 / STCAT

[NeurIPS 2022] Embracing Consistency: A One-Stage Approach for Spatio-Temporal Video Grounding


ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -11) local_rank: 0

jianhua2022 opened this issue · comments

Hi, I am trying to run your code, but I get the error "ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -11) local_rank: 0 (pid: 69934)". I cannot work out how to fix it from this error report. Could you help me solve this problem?

In my experiments, I only use 2 GPUs (Tesla V100, 32G), and the environment was installed following your instructions. The detailed error output is shown below:

(stgrounding) yjh@DGX-1:~/workspace1/STCAT$ CUDA_VISIBLE_DEVICES=0,1 bash run_vidstg.sh
/home/yjh/anaconda3/envs/stgrounding/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects --local_rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

warnings.warn(
WARNING:torch.distributed.run:


Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.


ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -11) local_rank: 0 (pid: 44291) of binary: /home/yjh/anaconda3/envs/stgrounding/bin/python3
Traceback (most recent call last):
File "/home/yjh/anaconda3/envs/stgrounding/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/yjh/anaconda3/envs/stgrounding/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/yjh/anaconda3/envs/stgrounding/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in
main()
File "/home/yjh/anaconda3/envs/stgrounding/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/home/yjh/anaconda3/envs/stgrounding/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/home/yjh/anaconda3/envs/stgrounding/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
elastic_launch(
File "/home/yjh/anaconda3/envs/stgrounding/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/yjh/anaconda3/envs/stgrounding/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

scripts/train_net.py FAILED

Failures:
[1]:
time : 2023-03-06_10:59:36
host : DGX-1
rank : 1 (local_rank: 1)
exitcode : -11 (pid: 44292)
error_file: <N/A>
traceback : Signal 11 (SIGSEGV) received by PID 44292

Root Cause (first observed failure):
[0]:
time : 2023-03-06_10:59:36
host : DGX-1
rank : 0 (local_rank: 0)
exitcode : -11 (pid: 44291)
error_file: <N/A>
traceback : Signal 11 (SIGSEGV) received by PID 44291

(stgrounding) yjh@DGX-1:~/workspace1/STCAT$
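For reference, exitcode -11 in the report above means the worker processes died with Signal 11 (SIGSEGV), i.e. a segmentation fault rather than a Python exception, which is why no Python traceback from the training script itself appears. A quick way to narrow this down is to run a minimal distributed job that does not touch the STCAT code at all. The sketch below is a hypothetical sanity check (the file name ddp_check.py is not part of this repo); if it also segfaults, the problem is likely in the PyTorch/CUDA/NCCL installation rather than in the training code.

```python
# ddp_check.py -- minimal, hypothetical DDP sanity check (not part of STCAT).
# Launch with: torchrun --nproc_per_node=2 ddp_check.py
# If this also dies with SIGSEGV, the crash likely comes from the
# PyTorch/CUDA/NCCL installation rather than from scripts/train_net.py.
import os

import torch
import torch.distributed as dist


def main():
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    # One all-reduce across the two GPUs; each rank should print world_size.
    x = torch.ones(1, device=local_rank)
    dist.all_reduce(x)
    print(f"rank {dist.get_rank()}: all_reduce result = {x.item()}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

If this check passes but STCAT still segfaults, a mismatch between the installed PyTorch build and the system CUDA/NCCL versions is a common (though here unconfirmed) culprit.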

commented

Sorry, I cannot figure out the problem from this error information alone. Maybe you can try changing nproc_per_node to 2?
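The deprecation warning in the log above also notes that torchrun sets LOCAL_RANK in the environment instead of passing --local_rank. If you switch the launch to torchrun (with --nproc_per_node=2 for two GPUs), the training script needs to pick up the local rank accordingly. The snippet below is only a sketch of that pattern; how scripts/train_net.py actually parses its arguments may differ.

```python
# Hypothetical argument handling (the real parsing in scripts/train_net.py may
# differ): support both the old --local_rank flag passed by
# torch.distributed.launch and the LOCAL_RANK variable set by torchrun.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=-1)
args, _ = parser.parse_known_args()

# Prefer the environment variable when it is present (torchrun / --use_env).
local_rank = int(os.environ.get("LOCAL_RANK", args.local_rank))
```

The launch command would then be torchrun --nproc_per_node=2 scripts/train_net.py, keeping the remaining arguments as in run_vidstg.sh.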

Hello, may I ask whether this problem has been solved? And is it possible to reproduce the results of the original paper with two GPUs?