liucongg / ChatGLM-Finetuning

Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B on downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more.

Error while running train.py: exits with return code = -9

yhx0105 opened this issue

Hi, when I launch multi-GPU training with the following command:

CUDA_VISIBLE_DEVICES=0,1,2,3 deepspeed --master_port 520 train.py \
                --train_path data/spo_0.json \
                --model_name_or_path ChatGLM-6B/ \
                --per_device_train_batch_size 1 \
                --max_len 1560 \
                --max_src_len 1024 \
                --learning_rate 1e-4 \
                --weight_decay 0.1 \
                --num_train_epochs 2 \
                --gradient_accumulation_steps 4 \
                --warmup_ratio 0.1 \
                --mode glm \
                --train_type freeze \
                --freeze_module_name "layers.27.,layers.26.,layers.25.,layers.24." \
                --seed 1234 \
                --ds_file ds_zero2_no_offload.json \
                --gradient_checkpointing \
                --show_loss_step 10 \
                --output_dir ./output-glm

I get this error:
Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s]
[2023-09-10 11:45:59,207] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 6293
[2023-09-10 11:45:59,501] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 6294
[2023-09-10 11:45:59,754] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 6295
[2023-09-10 11:45:59,755] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 6296
[2023-09-10 11:45:59,967] [ERROR] [launch.py:434:sigkill_handler] ['/usr/bin/python', '-u', 'train.py', '--local_rank=3', '--train_path', 'data/spo_0.json', '--model_name_or_path', 'chatglm_6b', '--per_device_train_batch_size', '1', '--max_len', '1560', '--max_src_len', '1024', '--learning_rate', '1e-4', '--weight_decay', '0.1', '--num_train_epochs', '2', '--gradient_accumulation_steps', '4', '--warmup_ratio', '0.1', '--mode', 'chatglm_6b', '--train_type', 'ptuning', '--seed', '1234', '--ds_file', 'ds_zero2_no_offload.json', '--gradient_checkpointing', '--show_loss_step', '10', '--pre_seq_len', '16', '--prefix_projection', 'True', '--output_dir', './output-glm2'] exits with return code = -9
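For context, exits with return code = -9 means the worker process was killed with SIGKILL, which on Linux is almost always the kernel OOM killer reclaiming host (CPU) RAM while the checkpoint shards are being loaded. A quick way to confirm this on the training machine (assuming you have sudo access) is:

    # Look for OOM-killer entries in the kernel log around the time of the crash
    sudo dmesg -T | grep -i -E "out of memory|killed process"

    # Check available host RAM before launching; each rank loads its own copy of the
    # ChatGLM-6B weights (roughly 13 GB in fp16) into CPU memory during from_pretrained
    free -h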

When I run the single-GPU command:

CUDA_VISIBLE_DEVICES=0 deepspeed --master_port 520 train.py \
                --train_path data/spo_0.json \
                --model_name_or_path ChatGLM-6B/ \
                --per_device_train_batch_size 1 \
                --max_len 1560 \
                --max_src_len 1024 \
                --learning_rate 1e-4 \
                --weight_decay 0.1 \
                --num_train_epochs 2 \
                --gradient_accumulation_steps 4 \
                --warmup_ratio 0.1 \
                --mode glm \
                --train_type freeze \
                --freeze_module_name "layers.27.,layers.26.,layers.25.,layers.24." \
                --seed 1234 \
                --ds_file ds_zero2_no_offload.json \
                --gradient_checkpointing \
                --show_loss_step 10 \
                --output_dir ./output-glm

I get this error:

Loading checkpoint shards: 29%|█████████████████████████████████████████████████▏ | 2/7 [00:39<01:26, 17.39s/it]
[2023-09-10 11:47:43,374] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 6562
[2023-09-10 11:47:43,375] [ERROR] [launch.py:434:sigkill_handler] ['/usr/bin/python', '-u', 'train.py', '--local_rank=0', '--train_path', 'data/goods_price_model.jsonl', '--model_name_or_path', 'chatglm_6b', '--per_device_train_batch_size', '1', '--max_len', '768', '--max_src_len', '512', '--learning_rate', '1e-4', '--weight_decay', '0.1', '--num_train_epochs', '2', '--gradient_accumulation_steps', '4', '--warmup_ratio', '0.1', '--mode', 'chatglm_6b', '--train_type', 'ptuning', '--seed', '1234', '--ds_file', 'ds_zero2_no_offload.json', '--gradient_checkpointing', '--show_loss_step', '10', '--pre_seq_len', '16', '--prefix_projection', 'True', '--output_dir', './output-glm'] exits with return code = -9

What could be causing this?

Bro, did you manage to solve this?

Bro, did you manage to solve this?
It was insufficient CPU memory. With deepspeed you can try putting all of the model parameters onto the GPU; I solved it by switching to a machine with more CPU RAM.
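If switching to a machine with more CPU RAM is not an option, a generic Linux workaround (not specific to this repo) is to add temporary swap so that each rank's checkpoint load can finish without triggering the OOM killer, for example:

    # Create and enable a 64 GB swap file; 64G is only an assumption, size it to at
    # least (number of ranks) x (checkpoint size), roughly 13 GB per rank for ChatGLM-6B
    sudo fallocate -l 64G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # Confirm the swap is active, then re-run the deepspeed command
    free -h

Reducing the number of visible GPUs (e.g. CUDA_VISIBLE_DEVICES=0,1) also lowers peak host-RAM usage during loading, since every deepspeed rank loads its own copy of the checkpoint into CPU memory before moving it to the GPU.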

OK, thanks 🙏