Junjue-Wang / LoveDA

[NeurIPS 2021] LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation


Train Problem

Sixsheepdad opened this issue

I tried: !bash ./scripts/predict_test.sh (I ran this in Colab)
But I get an error:

/usr/local/lib/python3.10/dist-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects --local-rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

warnings.warn(
INFO:numexpr.utils:NumExpr defaulting to 2 threads.
usage: train.py [-h] [--local_rank LOCAL_RANK] [--config_path CONFIG_PATH] [--model_dir MODEL_DIR]
...
train.py: error: unrecognized arguments: --local-rank=0
[2023-10-29 16:36:10,361] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 2) local_rank: 0 (pid: 2895) of binary: /usr/bin/python3
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launch.py", line 196, in
main()
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launch.py", line 192, in main
launch(args)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launch.py", line 177, in launch
run(args)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 134, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

train.py FAILED

Failures:
<NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
time : 2023-10-29_16:36:10
host : d42768eb76b2
rank : 0 (local_rank: 0)
exitcode : 2 (pid: 2895)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Please tell me how I can solve this.
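
For context: the usage message above shows that train.py only declares --local_rank (with an underscore), while the PyTorch 2.x launcher passes --local-rank (with a hyphen), which is why argparse reports "unrecognized arguments". A minimal sketch of a workaround, assuming train.py parses this flag with argparse (the exact argument setup in the repo may differ):

import argparse
import os

parser = argparse.ArgumentParser()
# Accept both spellings; argparse stores either one under args.local_rank.
# The default falls back to the LOCAL_RANK environment variable that
# torchrun sets, so the script also works without the flag.
parser.add_argument('--local-rank', '--local_rank', dest='local_rank',
                    type=int, default=int(os.environ.get('LOCAL_RANK', 0)))
args, _ = parser.parse_known_args()
print(args.local_rank)

Pinning PyTorch to a 1.x release, where torch.distributed.launch still passes --local_rank with an underscore, also avoids the mismatch.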

I encountered the same problem when I used "bash ./scripts/train_hrnetw32.sh" in ubuntu. Have you solved it?

1. Single-machine, multi-GPU training via the shell script: launch training with two GPUs on one machine through the provided script (e.g. bash ./scripts/train_hrnetw32.sh).
2. Version matching: adjust the versions of PyTorch and its components, as well as TensorFlow and its components, so that they are compatible with each other.
3. Disabling multi-threading in OpenCV and NumPy: set num_workers=0 in loveda.py, or use another method to disable nested multi-threaded calls in OpenCV and NumPy to avoid deadlocks (a web search turns up several options); see the sketch after this list.
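
For point 3, a minimal sketch of what that change could look like, assuming loveda.py builds a standard torch.utils.data.DataLoader (the dataset below is only a stand-in; the actual LoveDA dataset class and loader arguments in the repo may differ):

import cv2
import torch
from torch.utils.data import DataLoader, TensorDataset

# Keep OpenCV single-threaded so that loader worker processes do not
# deadlock on nested thread pools.
cv2.setNumThreads(0)

# Stand-in dataset: four fake RGB tiles with integer masks, only to make
# the example runnable. Replace with the LoveDA dataset used in loveda.py.
dummy = TensorDataset(torch.zeros(4, 3, 64, 64),
                      torch.zeros(4, 64, 64, dtype=torch.long))

loader = DataLoader(
    dummy,
    batch_size=2,
    shuffle=True,
    num_workers=0,  # load data in the main process to rule out worker deadlocks
)

for images, masks in loader:
    print(images.shape, masks.shape)

Setting num_workers=0 trades some input-pipeline speed for easier debugging; once the deadlock is ruled out, the worker count can be raised again.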

Feel free to discuss further!

I encountered the same problem when I used "bash ./scripts/train_hrnetw32.sh" . Have you solved it?