

Adam-mini

This repository contains the PyTorch implementation of Adam-mini, a mini-version of Adam that achieves on-par or better performance than AdamW with a 45% to 50% smaller memory footprint.

Adam-mini reduces memory by cutting down the learning-rate (lr) resources in Adam (i.e., $1/\sqrt{v}$): we argue that more than 90% of these lrs in $v$ can be harmlessly removed if we:

(1) carefully partition the parameters into blocks following our proposed principle related to Hessian structure.
(2) assign a single but good lr to each parameter block.

We find a cheap and effective way to reach these requirements. The resulting algorithm is shown below in Algorithm 1. Check out more detailed descriptions in our paper: Adam-mini: Use Fewer Learning Rates To Gain More.
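For intuition, here is a minimal, hedged sketch of the resulting update in plain PyTorch. It is not the repository's implementation or a verbatim transcription of Algorithm 1; the function and variable names are illustrative, and the parameter blocks are assumed to be formed beforehand following the principle above.

import torch

# Minimal sketch of one Adam-mini update, assuming the parameters are already
# partitioned into blocks. The first moment is kept per parameter as in Adam,
# while the second moment is a single scalar per block: the running mean of the
# squared gradients within that block.
def adam_mini_step(param_blocks, states, lr=1e-3, beta1=0.9, beta2=0.999,
                   eps=1e-8, weight_decay=0.0, step=1):
    for i, p in enumerate(param_blocks):      # one tensor per block (illustrative)
        if p.grad is None:
            continue
        g = p.grad
        st = states.setdefault(i, {"m": torch.zeros_like(p), "v": 0.0})
        st["m"].mul_(beta1).add_(g, alpha=1 - beta1)                      # per-parameter momentum
        st["v"] = beta2 * st["v"] + (1 - beta2) * g.pow(2).mean().item()  # one scalar per block
        m_hat = st["m"] / (1 - beta1 ** step)                             # bias correction
        v_hat = st["v"] / (1 - beta2 ** step)
        p.data.mul_(1 - lr * weight_decay)                                # decoupled weight decay
        p.data.add_(m_hat, alpha=-lr / (v_hat ** 0.5 + eps))              # single lr per block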

How to use

You can use the Adam-mini optimizer as follows. Our implementation supports popular distributed frameworks including DDP, FSDP, and DeepSpeed.

import Adam_mini

optimizer = Adam_mini.Adam_mini(
    model=model,
    lr=learning_rate,
    weight_decay=weight_decay,
    beta1=beta1,
    beta2=beta2,
    epsilon=epsilon,
    model_sharding=True,
    n_embd=n_embd,
    n_head=n_head,
    n_query_groups=n_query_groups)

For all the hyperparameters, including learning rate (lr), weight_decay, beta1, beta2, and epsilon, we recommend using the same values as those used for AdamW.

If you are training a language model, please pass the following information to Adam-mini (a configuration sketch follows the list):

  • model_sharding: set to True if you are using model parallelism across more than one GPU, including FSDP and ZeRO stages 1, 2, and 3 in DeepSpeed. Set to False if you are using DDP or single-GPU training.

  • n_embd: the embedding dimension. Can be left unspecified if you are training a non-transformer model.

  • n_head: the number of attention heads. Can be left unspecified if you are training a non-transformer model.

  • n_query_groups: the number of query groups in Grouped Query Attention (GQA). If not specified, it defaults to n_head. Can be left unspecified if you are training a non-transformer model.
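A hedged configuration sketch for two common cases, reusing the constructor shown above (the model dimensions are illustrative placeholders, e.g. Llama-2-7B-like sizes, and learning_rate / weight_decay are whatever you would use for AdamW):

import Adam_mini

# Case 1: a transformer with Grouped Query Attention trained under FSDP or
# DeepSpeed ZeRO, i.e., parameters are sharded across GPUs. Sizes are illustrative.
optimizer = Adam_mini.Adam_mini(
    model=model, lr=learning_rate, weight_decay=weight_decay,
    beta1=0.9, beta2=0.95, epsilon=1e-8,
    model_sharding=True,
    n_embd=4096, n_head=32, n_query_groups=32)

# Case 2: a non-transformer model on a single GPU (or under DDP); the
# transformer-specific arguments can be left unspecified.
optimizer = Adam_mini.Adam_mini(
    model=model, lr=learning_rate, weight_decay=weight_decay,
    beta1=0.9, beta2=0.95, epsilon=1e-8,
    model_sharding=False)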

Here we provide sample code for pre-training, SFT, and RLHF. You need 2x A800-80GB or 2x A100-80GB GPUs to run the experiments below.

Example 1: Pre-training

We pre-train the GPT-2 series (125M to 1.5B) using the NanoGPT codebase under the DDP framework. Set up the conda environment:

conda env create -f gpt2/environment.yml
conda activate gpt2
cd gpt2

Run the code for GPT-2 pre-training:

bash run_gpt2.sh 

We also pre-train the Llama series (1B and 7B) using the TinyLlama codebase under the FSDP framework. We are wrapping up this code now and it will be released soon.

Example 2: Supervised Fine-tuning and RLHF

We fine-tune Llama2-7B using the ReMax codebase under the DeepSpeed framework. Set up the conda environment:

conda env create -f RLHF/environment.yml
conda activate rlhf
cd RLHF

Run the code for SFT with LoRA:

bash training_scripts/sft/run_sft_lora.sh 

Run the code for full-parameter SFT:

bash training_scripts/sft/run_sft_full.sh

Run the code for reward model training in RLHF:

bash training_scripts/reward/run_reward.sh 

Run the code for reward optimization in RLHF using ReMax:

bash training_scripts/po/remax/run_remax.sh 

Remarks

How to use Adam-mini with the Huggingface Trainer: if you are using the Huggingface Trainer, please override "create_optimizer" as follows to switch to Adam-mini:

def create_optimizer(self) -> "torch.optim.Optimizer":
    if self.optimizer is None:
        if self.finetuning_args.use_adammini:
            self.optimizer = Adam_mini(
                model=self.model, lr=self.args.learning_rate,
                weight_decay=self.args.weight_decay,
                beta1=self.args.adam_beta1, beta2=self.args.adam_beta2,
                model_sharding=True,
                n_embd=4096, n_head=32, n_query_groups=32)
    return super().create_optimizer()
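For context, a hedged sketch of how this override can sit inside a Trainer subclass (the import path, the use of the standard TrainingArguments fields, and the Llama-2-7B-like dimensions are assumptions; adapt them to your setup):

import torch
from transformers import Trainer
from Adam_mini import Adam_mini  # adjust the import path to your installation

class AdamMiniTrainer(Trainer):
    # Build Adam-mini instead of the Trainer's default AdamW.
    def create_optimizer(self) -> "torch.optim.Optimizer":
        if self.optimizer is None:
            self.optimizer = Adam_mini(
                model=self.model, lr=self.args.learning_rate,
                weight_decay=self.args.weight_decay,
                beta1=self.args.adam_beta1, beta2=self.args.adam_beta2,
                model_sharding=True,
                n_embd=4096, n_head=32, n_query_groups=32)  # illustrative 7B sizes
        return super().create_optimizer()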

About checkpoint saving: if you are using the FSDP distributed framework, please set "use_orig_params = False" in your FSDPStrategy. This allows you to save and load checkpoints without any issue (as suggested in issue #5). Conversely, the default setting of "use_orig_params = True" may result in errors during checkpoint saving.
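For example, when wrapping the model with PyTorch's FSDP directly, the flag can be passed as in the minimal sketch below (all other FSDP arguments are omitted); if you use a higher-level FSDPStrategy wrapper, forward the same flag through it.

from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Wrap the model with use_orig_params=False so checkpoints save and load cleanly
# when training with Adam-mini; all other FSDP options are left at their defaults here.
model = FSDP(model, use_orig_params=False)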

About CPU offload: our current implementation of Adam-mini supports CPU offload in FSDP, but it does not support CPU offload in DeepSpeed. Please turn off offload when using DeepSpeed; we will resolve this issue soon.
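For instance, a hedged sketch of the relevant piece of a DeepSpeed configuration, written as a Python dict, with optimizer offload kept off (the surrounding keys and values are illustrative, not a complete config):

# Illustrative DeepSpeed ZeRO settings with optimizer CPU offload disabled;
# batch size, precision, and the remaining keys depend on your setup.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,          # illustrative value
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "none"},  # keep offload off with Adam-mini
    },
}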

Acknowledgements

The above code is heavily based on the codebases of NanoGPT, TinyLlama, ReMax, and DeepSpeed.

Citation

If you find this code helpful, please cite our paper in the following format.

@article{zhang2024adam,
  title   = {Adam-mini: Use Fewer Learning Rates To Gain More},
  author  = {Zhang, Yushun and Chen, Congliang and Li, Ziniu and Ding, Tian and Wu, Chenwei and Ye, Yinyu and Luo, Zhi-Quan and Sun, Ruoyu},
  journal = {arXiv preprint arXiv:2406.16793},
  year    = {2024}
}
