horseee / LLM-Pruner

[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support LLaMA, Llama-2, BLOOM, Vicuna, Baichuan, etc.

Home Page: https://arxiv.org/abs/2305.11627

Evaluation metric (acc vs. acc_norm) for lm-evaluation-harness tasks

bokyeong1015 opened this issue

Hi, thank you very much for generously open-sourcing your excellent work.

I've run the evaluation code you kindly shared and obtained the results below. I have a question regarding the metric for each task: could you please clarify which of acc or acc_norm [ref] was used for the PIQA, HellaSwag, ARC-e, ARC-c, and OBQA tasks? Thanks for taking the time to check this inquiry.
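For context, my understanding of the two metrics in lm-evaluation-harness is: acc takes the argmax over the raw log-likelihoods of the answer choices, while acc_norm first divides each choice's log-likelihood by the byte length of its continuation. A rough sketch of the difference for a single multiple-choice example (the numbers and variable names are illustrative only, not harness code):

```python
import numpy as np

# Illustrative per-choice log-likelihoods for one multiple-choice question
# (made-up numbers and strings, not real harness output).
loglikelihoods = np.array([-6.0, -8.0])
continuations = ["no", "a much longer answer choice"]
gold = 1  # index of the correct choice

# acc: argmax over the raw log-likelihoods
acc = float(np.argmax(loglikelihoods) == gold)  # 0.0 here

# acc_norm: argmax over log-likelihoods divided by the byte length of each continuation
byte_lens = np.array([len(c.encode("utf-8")) for c in continuations])
acc_norm = float(np.argmax(loglikelihoods / byte_lens) == gold)  # 1.0 here

# The reported task score is the mean of these per-example values.
print(acc, acc_norm)
```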

20%-pruned -> post-trained LLaMA from scripts/llama_prune.sh

| Task | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA |
|---|---|---|---|---|---|---|---|
| LLM-Pruner paper | 66.79 | 77.58 | 68.48 | 64.96 | 64.06 | 37.88 | 39.00 |
| Reproduced (LLM-Pruner code): acc | 65.20 | 77.15 | 53.19 | 63.69 | 64.77 | 36.26 | 28.80 |
| Reproduced (LLM-Pruner code): acc_norm | n/a | 76.93 | 68.63 | n/a | 52.27 | 36.95 | 40.40 |

| Task | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA |
|---|---|---|---|---|---|---|---|
| Different lm-evaluation-harness version: acc | 66.24 | 77.58 | 53.54 | 66.14 | 70.54 | 37.54 | 31.40 |
| Different lm-evaluation-harness version: acc_norm | n/a | 78.13 | 71.39 | n/a | 65.95 | 39.33 | 41.20 |

Original LLaMA-7B

| Task | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA |
|---|---|---|---|---|---|---|---|
| LLaMA paper | 76.5 | 79.8 | 76.1 | 70.1 | 72.8 | 47.6 | 57.2 |
| LLM-Pruner paper | 73.18 | 78.35 | 72.99 | 67.01 | 67.45 | 41.38 | 42.40 |
| Reproduced (LLM-Pruner code): acc | 73.06 | 78.35 | 56.42 | 67.01 | 67.34 | 38.14 | 28.20 |
| Reproduced (LLM-Pruner code): acc_norm | n/a | 77.37 | 72.99 | n/a | 52.48 | 41.38 | 42.40 |

| Task | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA |
|---|---|---|---|---|---|---|---|
| Different lm-evaluation-harness version: acc | 75.05 | 78.67 | 56.93 | 69.93 | 75.29 | 41.89 | 34.60 |
| Different lm-evaluation-harness version: acc_norm | n/a | 79.16 | 76.22 | n/a | 72.85 | 44.71 | 44.40 |

Note

  • Underlined: metrics reported in the LLM-Pruner paper
  • The scores reported in the LLM-Pruner paper are fully reproducible with this repo; the lm-evaluation-harness version affects the scores because of recent updates to the harness.
  • [Table 1 of the LLM-Pruner paper] The evaluation is performed with different prompts, so the results are lower than the official ones in the LLaMA paper.

Hi. I used these keys:
```python
select_key = {
    'boolq': 'acc',
    'hellaswag': 'acc_norm',
    'arc_easy': 'acc',
    'piqa': 'acc',
    'arc_challenge': 'acc_norm',
    'winogrande': 'acc',
    'openbookqa': 'acc_norm',
}
```
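In case it helps, here is a minimal sketch of how this mapping can be applied to the JSON output of lm-evaluation-harness, whose top-level "results" field maps each task to its metric dict. The file path below is just a placeholder, not part of the LLM-Pruner code:

```python
import json

# Placeholder path: wherever your lm-evaluation-harness run dumped its JSON output.
results_path = "results/llama7b_pruned20.json"

select_key = {
    'boolq': 'acc',
    'hellaswag': 'acc_norm',
    'arc_easy': 'acc',
    'piqa': 'acc',
    'arc_challenge': 'acc_norm',
    'winogrande': 'acc',
    'openbookqa': 'acc_norm',
}

with open(results_path) as f:
    harness_output = json.load(f)

# The harness reports each task as a dict of metrics,
# e.g. {"piqa": {"acc": 0.7715, "acc_norm": 0.7693, ...}, ...}.
for task, metric in select_key.items():
    score = harness_output["results"][task][metric]
    print(f"{task:15s} {metric:9s} {100 * score:.2f}")
```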

Besides, I noticed the difference in the performance of the original LLaMA-7B. I double-checked the code and used the evaluation code in my repo to re-evaluate LLaMA-7B, and I got a very similar performance (on ARC-easy, 67.38 vs. 67.45). Were the LLaMA-7B results you listed above obtained with the lm-evaluation-harness in my repo (a previous commit of lm-evaluation-harness)? Since the lm-evaluation-harness has changed a lot over the past months, some results are not consistent.

Thank you for your response and clarification!

I apologize for omitting the commit hash for the LLaMA-7B results. As you pointed out, I realized that I had used a different lm-evaluation-harness commit than the one in your code.


> Since the lm-evaluation-harness has changed a lot over the past months, some results are not consistent.

Thanks for pointing this out. To ensure clarity, I've updated the tables above and will soon provide results using your evaluation code. Thanks again for your assistance.

Hi, I've added the results using your repo, which are fully reproducible. I also made an explicit note to avoid any confusion. Big thanks for your time and help!

Note

  • Underlined: metrics reported in the LLM-Pruner paper
  • The scores reported in the LLM-Pruner paper are fully reproducible with this repo; the lm-evaluation-harness version affects the scores because of recent updates to the harness.
  • [Table 1 of the LLM-Pruner paper] The evaluation is performed with different prompts, so the results are lower than the official ones in the LLaMA paper.

Hi 😄. Thank you very much for the detailed notes and for the experimental results you contributed using the new version of lm-evaluation-harness!