horseee / LLM-Pruner

[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support LLaMA, Llama-2, BLOOM, Vicuna, Baichuan, etc.

Home Page: https://arxiv.org/abs/2305.11627


Adding quantization

Duncan1115 opened this issue

If I use multiple strategies such as GPTQ + LLM-Pruner + LoRA, could the compression ratio of the LLM be greatly improved while keeping acceptable performance?


I assume the correct way to do it would go something like (a rough sketch follows the list):

  0. (optional) Increase the size and topic breadth of the LLM-Pruner corpus
  1. LLM-Pruner
  2. LoRA/QLoRA
  3. GPTQ

This is completely hypothetical at the moment though, and you'd need to try it out yourself to see whether it works as intended.
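A minimal sketch of that order in Python, assuming the pruned checkpoint stores the model and tokenizer objects via torch.save (as LLM-Pruner's pruning script does) and using peft for the LoRA stage; the path, LoRA hyperparameters, and target modules below are placeholders, not values from the repo:

```python
import torch
from peft import LoraConfig, get_peft_model

# 1. LLM-Pruner: load the pruned checkpoint. The pruned model has a
#    non-standard architecture, so the whole object is loaded rather than
#    going through from_pretrained(). The path is hypothetical.
ckpt = torch.load("prune_log/pruned_llama.bin", map_location="cpu")
model, tokenizer = ckpt["model"], ckpt["tokenizer"]

# 2. LoRA/QLoRA: attach adapters and run recovery fine-tuning on the corpus
#    (QLoRA would additionally keep the frozen base weights in 4-bit).
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # illustrative choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
# ... recovery fine-tuning loop goes here, then merge the adapters back ...

# 3. GPTQ: only after pruning + recovery, hand the merged FP16 model to a
#    post-training quantizer (GPTQ / AWQ / LLM.int8()).
```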

Thanks for your kind response! We also assume that if quantization needs to be applied, the correct path is the one you listed. One reason is that if pruning needs to be performed on a CPU, certain operations, such as SiLU, are not supported on the CPU in FP16 and below. If you apply quantization first and then prune, the quantized weights could end up being converted back to FP32.
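As a quick sanity check of the CPU/FP16 point (whether this raises depends on your PyTorch build, so the snippet checks rather than asserting a failure):

```python
import torch
import torch.nn.functional as F

# Whether SiLU runs on CPU in half precision depends on the PyTorch version;
# older builds raise a "not implemented for 'Half'" RuntimeError here, which
# is why the pruning pass stays in FP32 when it has to run on the CPU.
x = torch.randn(4, dtype=torch.float16)  # CPU tensor in FP16
try:
    F.silu(x)
    print("SiLU on CPU/FP16 is supported by this PyTorch build.")
except RuntimeError as err:
    print(f"SiLU on CPU/FP16 is not supported: {err}")
```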

@horseee Hi, thanks for the good suggestion. May I ask why the paper doesn't compare the results between pure quantization and pure pruning?

Hi. Quantization is orthogonal to pruning and hence can be readily deployed on top of pruning to further reduce the network size. These are two different lines of model compression, focusing on different types of redundancy in models. For exactly this reason, most papers on pruning CNNs/BERT do not compare the performance of the two methods.

Thanks a lot! My question came from the fact that quantization methods such as GPTQ/AWQ can achieve better performance at large compression ratios than pruning methods... Your answer helped me a lot~

@horseee Hi, I have two questions I hope you could reply to, thanks:

  1. Can a model pruned by LLM-Pruner (or other pruning tricks) achieve better inference performance under FP16?
  2. How can a model pruned by LLM-Pruner then be quantized to INT8 with GPTQ or other methods?

Hi. We conducted a quick experiment, and here are the inference results:

| Model | #Param | Memory | Latency | Speedup | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LLaMA-7B | 6.74B | 12884.5MiB | 69.32s | 1x | 73.18 | 78.35 | 72.99 | 67.01 | 67.45 | 41.38 | 42.40 | 63.25 |
| LLM.int8() | 6.74B | 6777.7MiB | 76.20s | 0.91x | 73.36 | 78.18 | 73.01 | 66.93 | 67.47 | 40.87 | 41.80 | 63.09 |
| LLaMA-5.4B | 5.47B | 10488.4MiB | 58.55s | 1.18x | 76.57 | 77.37 | 66.60 | 65.82 | 70.62 | 40.70 | 38.80 | 62.36 |
| LLaMA-5.4B + LLM.int8() | 5.47B | 5444.37MiB | 63.10s | 1.09x | 76.39 | 76.71 | 66.62 | 66.46 | 70.54 | 40.19 | 39.20 | 62.30 |

Latency is measured on the WikiText-2 test set. LLM.int8() slows down inference of the LLaMA-7B model in our case, which is also noted in the LLM.int8() paper for the 6.7B model size.

@horseee Hi, thanks for your kind reply.
Actually, I don't intend to compare the performance of pruning and quantization, as they are two different ways to compress the model. I mean: how can we smoothly combine the pruned model with quantization? Could it be done simply and directly?

In my experiment above, the pruned model is quantized following the instructions of bitsandbytes. I didn't try GPTQ, since it seems more complicated when the model is non-standard and cannot be loaded via .from_pretrained().
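For reference, a minimal sketch of that bitsandbytes route, assuming the pruned checkpoint stores the model object under a "model" key (the path and key are placeholders) and following the common pattern of swapping nn.Linear layers for Linear8bitLt, with quantization triggered by the move to GPU:

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

def replace_linear_with_int8(module: nn.Module) -> nn.Module:
    """Recursively swap nn.Linear layers for bitsandbytes 8-bit layers."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            int8_linear = bnb.nn.Linear8bitLt(
                child.in_features, child.out_features,
                bias=child.bias is not None,
                has_fp16_weights=False,  # keep int8 weights after quantization
                threshold=6.0,           # outlier threshold from LLM.int8()
            )
            int8_linear.load_state_dict(child.state_dict())
            setattr(module, name, int8_linear)
        else:
            replace_linear_with_int8(child)
    return module

# The pruned checkpoint is loaded as a whole object (from_pretrained() does
# not apply to it); the path below is hypothetical.
pruned = torch.load("prune_log/pruned_llama.bin", map_location="cpu")["model"]
pruned = replace_linear_with_int8(pruned)
pruned = pruned.cuda()  # moving to GPU triggers the actual int8 quantization
```

In practice you would probably leave the lm_head in FP16 (as transformers' load_in_8bit does) rather than converting every Linear layer.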