horseee / LLM-Pruner

[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports LLaMA, Llama-2, BLOOM, Vicuna, Baichuan, etc.

Home Page: https://arxiv.org/abs/2305.11627


Force even pruning across layers

thedarkzeno opened this issue

Is there a way to force the pruning to remove the same number of parameters from every layer?
This would make the resulting model compatible with the Hugging Face implementation (so it could be loaded with from_pretrained).

Hi.

There are two ways to prune an equal number of parameters from every layer (see the command sketch after this list):

  1. Stay with block-wise pruning: set block_mlp_layer_start/block_mlp_layer_end/block_attention_layer_start/block_attention_layer_end to 0/N/0/N, where N is the number of layers in the model, so that every layer falls inside the pruning range.

  2. Alternatively, switch to channel-wise pruning by passing --channel_wise instead of --block_wise.
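A minimal command sketch of both options. The layer-range and channel-wise flags are the ones mentioned above; the script name (hf_prune.py), model identifier, and the other flags (--base_model, --pruning_ratio, --save_ckpt_log_name) are assumptions based on typical usage of the repo and may need to be adjusted for your setup, as does N (here 32, the layer count of LLaMA-7B):

```bash
# Option 1: block-wise pruning, but with the layer range widened to cover
# all layers (0..N), so every layer loses the same fraction of parameters.
python hf_prune.py \
    --base_model decapoda-research/llama-7b-hf \
    --pruning_ratio 0.25 \
    --block_wise \
    --block_mlp_layer_start 0 --block_mlp_layer_end 32 \
    --block_attention_layer_start 0 --block_attention_layer_end 32 \
    --save_ckpt_log_name llama_prune_even

# Option 2: channel-wise pruning, which removes the same channels
# in every layer by construction.
python hf_prune.py \
    --base_model decapoda-research/llama-7b-hf \
    --pruning_ratio 0.25 \
    --channel_wise \
    --save_ckpt_log_name llama_prune_channel
```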

However, note that either approach may significantly hurt the model's performance: pruning parameters from the first or last layers has a substantial influence on the model's behavior, as shown by the experimental results in Figure 3 of our paper.