FlagAI-Open / FlagAI

FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use and extensible toolkit for large-scale models.

[Question]: aquila_pretrain.py: error: unrecognized arguments: --local-rank=1

caiweige opened this issue · comments

commented

Description

Pretraining based on the Aquila-7B model under
FlagAI/examples/Aquila/Aquila-pretrain.
Environment: single machine with 2 GPUs; the hostfile has been set to 10.2.170.111 slots=2.
Command used: bash local_trigger_docker.sh hostfile Aquila-pretrain.yaml Aquila-7B aquila_pretrain
The following error is raised:


[INFO] bmtrain_mgpu.sh: hostfile configfile model_name exp_name exp_version
/home/vgpu/anaconda3/lib/python3.10/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
[2023-07-20 14:50:14,784] [INFO] [logger.py:85:log_dist] [Rank -1] Unsupported bmtrain
[2023-07-20 14:50:14,830] [INFO] [logger.py:85:log_dist] [Rank -1] Unsupported bmtrain
[2023-07-20 14:50:15,837] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-07-20 14:50:15,864] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
usage: aquila_pretrain.py [-h] [--env_type ENV_TYPE]
                          [--experiment_name EXPERIMENT_NAME]
                          [--model_name MODEL_NAME] [--epochs EPOCHS]
                          [--batch_size BATCH_SIZE] [--lr LR]
                          [--warmup_start_lr WARMUP_START_LR] [--seed SEED]
                          [--fp16 FP16] [--pytorch_device PYTORCH_DEVICE]
                          [--clip_grad CLIP_GRAD]
                          [--checkpoint_activations CHECKPOINT_ACTIVATIONS]
                          [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS]
                          [--weight_decay WEIGHT_DECAY] [--eps EPS]
                          [--warm_up WARM_UP] [--warm_up_iters WARM_UP_ITERS]
                          [--skip_iters SKIP_ITERS]
                          [--log_interval LOG_INTERVAL]
                          [--eval_interval EVAL_INTERVAL]
                          [--save_interval SAVE_INTERVAL]
                          [--save_dir SAVE_DIR] [--load_dir LOAD_DIR]
                          [--save_optim SAVE_OPTIM] [--save_rng SAVE_RNG]
                          [--load_type LOAD_TYPE] [--load_optim LOAD_OPTIM]
                          [--load_rng LOAD_RNG] [--tensorboard TENSORBOARD]
                          [--tensorboard_dir TENSORBOARD_DIR]
                          [--deepspeed_activation_checkpointing DEEPSPEED_ACTIVATION_CHECKPOINTING]
                          [--num_checkpoints NUM_CHECKPOINTS]
                          [--deepspeed_config DEEPSPEED_CONFIG]
                          [--model_parallel_size MODEL_PARALLEL_SIZE]
                          [--training_script TRAINING_SCRIPT]
                          [--hostfile HOSTFILE] [--master_ip MASTER_IP]
                          [--master_port MASTER_PORT] [--num_nodes NUM_NODES]
                          [--num_gpus NUM_GPUS] [--not_call_launch]
                          [--local_rank LOCAL_RANK] [--wandb WANDB]
                          [--wandb_dir WANDB_DIR] [--wandb_key WANDB_KEY]
                          [--already_fp16 ALREADY_FP16]
                          [--resume_dataset RESUME_DATASET]
                          [--shuffle_dataset SHUFFLE_DATASET]
                          [--adam_beta1 ADAM_BETA1] [--adam_beta2 ADAM_BETA2]
                          [--bmt_cpu_offload BMT_CPU_OFFLOAD]
                          [--bmt_lr_decay_style BMT_LR_DECAY_STYLE]
                          [--bmt_loss_scale BMT_LOSS_SCALE]
                          [--bmt_loss_scale_steps BMT_LOSS_SCALE_STEPS]
                          [--lora LORA] [--lora_r LORA_R]
                          [--lora_alpha LORA_ALPHA]
                          [--lora_dropout LORA_DROPOUT]
                          [--lora_target_modules LORA_TARGET_MODULES]
                          [--yaml_config YAML_CONFIG]
                          [--bmt_async_load BMT_ASYNC_LOAD]
                          [--bmt_pre_load BMT_PRE_LOAD]
                          [--pre_load_dir PRE_LOAD_DIR]
                          [--enable_sft_dataset_dir ENABLE_SFT_DATASET_DIR]
                          [--enable_sft_dataset_file ENABLE_SFT_DATASET_FILE]
                          [--enable_sft_dataset_val_file ENABLE_SFT_DATASET_VAL_FILE]
                          [--enable_sft_dataset ENABLE_SFT_DATASET]
                          [--enable_sft_dataset_text ENABLE_SFT_DATASET_TEXT]
                          [--enable_sft_dataset_jsonl ENABLE_SFT_DATASET_JSONL]
                          [--enable_sft_conversations_dataset ENABLE_SFT_CONVERSATIONS_DATASET]
                          [--enable_sft_conversations_dataset_v2 ENABLE_SFT_CONVERSATIONS_DATASET_V2]
                          [--enable_sft_conversations_dataset_v3 ENABLE_SFT_CONVERSATIONS_DATASET_V3]
                          [--enable_weighted_dataset_v2 ENABLE_WEIGHTED_DATASET_V2]
                          [--IGNORE_INDEX IGNORE_INDEX]
                          [--enable_flash_attn_models ENABLE_FLASH_ATTN_MODELS]
aquila_pretrain.py: error: unrecognized arguments: --local-rank=0
[... the same usage message is printed again verbatim by the second rank ...]
aquila_pretrain.py: error: unrecognized arguments: --local-rank=1
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 2) local_rank: 0 (pid: 92658) of binary: /home/vgpu/anaconda3/bin/python
Traceback (most recent call last):
  File "/home/vgpu/anaconda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/vgpu/anaconda3/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/vgpu/anaconda3/lib/python3.10/site-packages/torch/distributed/launch.py", line 196, in <module>
    main()
  File "/home/vgpu/anaconda3/lib/python3.10/site-packages/torch/distributed/launch.py", line 192, in main
    launch(args)
  File "/home/vgpu/anaconda3/lib/python3.10/site-packages/torch/distributed/launch.py", line 177, in launch
    run(args)
  File "/home/vgpu/anaconda3/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/vgpu/anaconda3/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/vgpu/anaconda3/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
aquila_pretrain.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2023-07-20_14:50:18
  host      : vgpu
  rank      : 1 (local_rank: 1)
  exitcode  : 2 (pid: 92659)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-07-20_14:50:18
  host      : vgpu
  rank      : 0 (local_rank: 0)
  exitcode  : 2 (pid: 92658)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Alternatives

No response

This problem occurs with torch 2.0 and above; downgrading works (1.13 works on my side). I have also prepared a fix for torch 2.0, but it still needs to be tested for conflicts with other modules before it can be released.
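
For anyone hitting this before that fix is released, here is a minimal sketch of a common workaround (my own assumption, not the official patch): the torch >= 2.0 launcher passes the rank as --local-rank (with a dash), while env_args.py only registers --local_rank (with an underscore), so the dashed spelling can be registered as an alias of the same argument:

# Hypothetical change to the ArgumentParser used by flagai/env_args.py, not the official fix.
# argparse stores both spellings into args.local_rank, so --local-rank=1 parses cleanly.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', '--local-rank', dest='local_rank',
                    default=0, type=int,
                    help='local rank passed in by the distributed launcher')

args, _ = parser.parse_known_args(['--local-rank=1'])
print(args.local_rank)  # -> 1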

commented

Closing for now; please reopen this issue if the problem persists. Thanks.

Run:
bash dist_trigger_docker.sh hostfile Aquila-chat.yaml aquila-7b aquila_experiment
Error:

[INFO] bmtrain_mgpu.sh: hostfile configfile model_name exp_name exp_version
bmtrain_mgpu.sh: line 84: torchrun: command not found

envs: 1 * 4090

torch                       2.1.0+cu118
torchaudio                  2.1.0
torchmetrics                1.2.0
torchvision                 0.16.0+cu118

hostfile: 192.168.1.5 slots=1
I edited ~/FlagAI/flagai/env_args.py to add:
self.parser.add_argument('--local_rank', default=0, type=int, help='start training from saved checkpoint')
but it has no effect.
What more can be done? I don't feel comfortable downgrading at this time. Are there any other options?
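
One alternative to downgrading, sketched here as an untested idea based on the deprecation warning earlier in this thread (not an official FlagAI change): since adding --local_rank to the parser does not help when the launcher passes --local-rank with a dash, the rank can be read from the LOCAL_RANK environment variable instead of the command line.

# Hypothetical helper: torchrun exports LOCAL_RANK for each worker, so reading it from
# the environment sidesteps the --local-rank / --local_rank argparse mismatch entirely.
import os

def get_local_rank(cli_value: int = 0) -> int:
    # Prefer the value set by the launcher; fall back to whatever the CLI provided.
    return int(os.environ.get('LOCAL_RANK', cli_value))

local_rank = get_local_rank()

Separately, the "torchrun: command not found" message suggests the shell running bmtrain_mgpu.sh does not have the PyTorch 2.x environment on its PATH; python -m torch.distributed.run is the module-form equivalent of the torchrun command.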