hpcaitech / ColossalAI-Examples

Examples of training models with hybrid parallelism using ColossalAI

An error occurred during multi-node distributed training

ShangWeiKuo opened this issue · comments

πŸ› Describe the bug

Excuse me. When I run the command

colossalai run --nproc_per_node 4 --host [host1 ip addr],[host2 ip addr] --master_addr [host1 ip addr] train.py

I get this message:

Error: failed to run torchrun --nproc_per_node=4 --nnodes=2 --node_rank=1 --rdzv_backend=c10d --rdzv_endpoint=[host1 ip addr]:29500 --rdzv_id=colossalai-default-job train.py on [host2 ip addr]

What configurations do I have to set in the train.py you provided?
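As a first debugging step, you can bypass the launcher and start torchrun manually on each node, reusing the exact command from the error message. This is a minimal sketch, assuming train.py sits at the same path on both machines and port 29500 on host1 is reachable from host2:

# On host1 (node rank 0)
torchrun --nproc_per_node=4 --nnodes=2 --node_rank=0 --rdzv_backend=c10d --rdzv_endpoint=[host1 ip addr]:29500 --rdzv_id=colossalai-default-job train.py

# On host2 (node rank 1)
torchrun --nproc_per_node=4 --nnodes=2 --node_rank=1 --rdzv_backend=c10d --rdzv_endpoint=[host1 ip addr]:29500 --rdzv_id=colossalai-default-job train.py

If both commands start training, the failure lies in how colossalai run reaches host2 rather than in train.py itself.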

Environment

CUDA Version: 11.3
PyTorch Version: 1.12.0
CUDA Version in PyTorch Build: 11.3
PyTorch CUDA Version Match: ✓
CUDA Extension: ✓
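For reference, a report like the one above can be printed on each node with ColossalAI's CLI (assuming the check subcommand exists in the installed release); running it on both hosts is a quick way to confirm the environments match:

colossalai check -i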

Hi, are these two machines connected via passwordless SSH?
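If not, that would explain the failure: the multi-node launcher typically reaches the other hosts over SSH to start torchrun there. A minimal setup sketch, assuming OpenSSH on both machines and a hypothetical account named user on host2:

# On host1: generate a key pair (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -b 4096

# Install the public key on host2
ssh-copy-id user@[host2 ip addr]

# Confirm login now works without a password prompt
ssh user@[host2 ip addr] hostname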