THUDM / SwissArmyTransformer

SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.

Home Page: https://THUDM.github.io/SwissArmyTransformer

How can I use DeepSpeed's offload feature to reduce GPU memory usage?

yt7589 opened this issue · comments

When I run LoRA fine-tuning of VisualGLM-6B, I get a CUDA Out of Memory error because my GPU only has 16 GB of memory. I added a DeepSpeed configuration file on the command line:

gpt_options=" \
       --experiment-name finetune-$MODEL_TYPE \
       --model-parallel-size ${MP_SIZE} \
       --mode finetune \
       --train-iters 300 \
       --resume-dataloader \
       $MODEL_ARGS \
       --train-data ${train_data} \
       --valid-data ${eval_data} \
       --distributed-backend nccl \
       --lr-decay-style cosine \
       --warmup .02 \
       --checkpoint-activations \
       --save-interval 300 \
       --eval-interval 10000 \
       --save "./work/ckpt" \
       --deepspeed \
       --deepspeed_config finetune/deepspeed.json \
       --split 1 \
       --eval-iters 10 \
       --eval-batch-size 1 \
       --lr 0.0001 \
       --batch-size 1 \
       --skip-init \
       --fp16 \
       --use_lora
"

              

run_cmd="${OPTIONS_NCCL} ${OPTIONS_SAT} deepspeed --master_port 16666 --num_gpus=1 --hostfile ${HOST_FILE_PATH} finetune_visualglm.py ${gpt_options}"
echo ${run_cmd}
eval ${run_cmd}

The contents of the configuration file are as follows:

{
    "train_micro_batch_size_per_gpu": 1,
    "zero_allow_untested_optimizer": true,
    "gradient_accumulation_steps": 1,
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "initial_scale_power": 16,
        "loss_scale_window": 1000,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },
    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },
    "zero_optimization": {
        "stage": 2,
        "offload_param": {
            "device": "cpu"
        },
        "offload_optimizer": {
            "device": "cpu"
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": false
    }
}
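
Note that in DeepSpeed, offload_param is only honored when zero_optimization.stage is 3; with "stage": 2, only offload_optimizer takes effect and the stage3_* keys are ignored. A stage-3 variant of the zero_optimization block could look like the sketch below (untested here; the pin_memory entries are an assumption based on DeepSpeed's documented offload options, everything else is carried over from the config above):

    "zero_optimization": {
        "stage": 3,
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        },
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": false
    }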

This configuration worked under ChatGLM. But when I use it to fine-tune VisualGLM-6B, I still get a CUDA Out of Memory error. My machine has 128 GB of RAM, which should be plenty for LoRA fine-tuning, so it looks like ZeRO offload is not taking effect. Tracing the source code, I found that from_pretrained calls the get_model method, and the model.to(device) call inside get_model is where it fails, so DeepSpeed does not seem to be involved at that point. How can I load the model into CPU memory through DeepSpeed?
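
For reference on that last question: DeepSpeed's deepspeed.zero.Init context manager builds a module with its parameters already partitioned (optionally allocated in CPU memory), which avoids the full-size model.to(device) step entirely. A minimal sketch, assuming a ZeRO stage-3 config and a hypothetical build_model() standing in for SwissArmyTransformer's actual get_model path:

import deepspeed

# Sketch only: construct the model inside zero.Init so its parameters are
# partitioned (and optionally placed in CPU memory) at creation time,
# instead of materializing the whole model and calling model.to(device).
# build_model() is a hypothetical stand-in for SAT's get_model();
# finetune/deepspeed.json is the config above, changed to ZeRO stage 3.
with deepspeed.zero.Init(remote_device="cpu",
                         pin_memory=True,
                         config_dict_or_path="finetune/deepspeed.json"):
    model = build_model()

# deepspeed.initialize then wraps the model; parameters remain partitioned
# and optimizer state lives on CPU per the offload settings in the config.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="finetune/deepspeed.json",
)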