EleutherAI / lm-evaluation-harness

A framework for few-shot evaluation of language models.

Home Page: https://www.eleuther.ai


When using Accelerate for data-parallel inference, different numbers of GPUs produce different results

s1ghhh opened this issue

commented

Hi @haileyschoelkopf, thank you for your awesome open-source work. We have been running evaluations with lm-eval and noticed that, when using accelerate for data-parallel inference, the number of GPUs used changes the results, and the deviation between runs is larger than the stderr (about 0.012x).

We have conducted extensive evaluations on Winogrande using the same settings as the Open LLM Leaderboard, with num_fewshot=5 and batch_size=1.

Here are the results we obtained:

# of GPUs   acc
1           0.7443
2           0.7419
3           0.7411
4           0.7269
5           0.7498
6           0.7530
7           0.7498
8           0.7443
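
For context: assuming the standard 1,267-example Winogrande validation split, the binomial standard error at acc ≈ 0.744 is sqrt(0.744 × 0.256 / 1267) ≈ 0.012, which matches the stderr noted above; the spread across GPU counts (0.7269 to 0.7530) is therefore roughly two standard errors.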

Script for 5-shot inference with 1 GPU:

CUDA_VISIBLE_DEVICES=0 accelerate launch -m lm_eval --model hf \
  --model_args pretrained=allenai/tulu-2-dpo-7b,trust_remote_code=True,dtype="bfloat16" \
  --tasks winogrande \
  --num_fewshot 5 \
  --batch_size 1

Script for 5-shot inference with 4 GPUs:

CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch -m lm_eval --model hf \
  --model_args pretrained=allenai/tulu-2-dpo-7b,trust_remote_code=True,dtype="bfloat16" \
  --tasks winogrande \
  --num_fewshot 5 \
  --batch_size 1

We believe this might be due to few-shot sampling: when we set num_fewshot=0, we obtain a stable result (0.6993) regardless of the number of GPUs.

Script for 0-shot inference with 1 GPU:

CUDA_VISIBLE_DEVICES=0 accelerate launch -m lm_eval --model hf \
  --model_args pretrained=allenai/tulu-2-dpo-7b,trust_remote_code=True,dtype="bfloat16" \
  --tasks winogrande \
  --num_fewshot 0 \
  --batch_size 1

Script for 0-shot inference with 4 GPUs:

CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch -m lm_eval --model hf \
  --model_args pretrained=allenai/tulu-2-dpo-7b,trust_remote_code=True,dtype="bfloat16" \
  --tasks winogrande \
  --num_fewshot 0 \
  --batch_size 1

Our environment:

accelerate=0.27.2
transformers=4.36.2
lm_eval=0.4.0     commit 89618bf8421d27c8cf28004d616b33fc5b305ceb (HEAD -> main, origin/main, origin/HEAD)

Furthermore, we have evaluated on other servers and with the latest version, observing similar behavior.

Thank you in advance for your assistance!

It's probably because of #1308. So the few-shot samples used for a particular doc_id will vary depending on whether DP is used and on the number of ranks. The best way to confirm would be to use a deterministic sampler, as MMLU does:

fewshot_config:
  sampler: first_n
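
For instance, a minimal task override in the MMLU style could look like the sketch below. The file name, task name, and include target are illustrative (check the winogrande YAML actually shipped with your harness version), and a custom YAML kept outside the harness's task tree would need to be registered, e.g. via --include_path:

# winogrande_first_n.yaml -- hypothetical override
include: winogrande.yaml      # assumed name of the stock task config
task: winogrande_first_n
fewshot_config:
  sampler: first_n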

commented

> It's probably because of #1308. So the few-shot samples used for a particular doc_id will vary depending on whether DP is used and on the number of ranks. The best way to confirm would be to use a deterministic sampler, as MMLU does:
>
> fewshot_config:
>   sampler: first_n

Hi, thank you for the timely and very helpful reply! By selecting the first_n samples as few-shot examples, I am now able to obtain stable results. However, I've noticed that the results from the first_n strategy are lower than those from the previous random-sampling strategy (0.7119 < 0.7443); perhaps for some tasks, simply selecting the first n items is not reasonable. The solution you mentioned in #1308 seems like a good approach, and I am trying to implement it.

> By selecting the first_n samples as few-shot examples, I am now able to obtain stable results. [...] The solution you mentioned in #1308 seems like a good approach, and I am trying to implement it.

Hi, have you implemented the approach mentioned in #1308? Can you share it?

commented

> Hi, have you implemented the approach mentioned in #1308? Can you share it?

Perhaps you can refer to this.
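
To make the thread self-contained, here is a minimal sketch of the per-document seeding idea from #1308. All names here (sample_fewshot_*, fewshot_pool) are illustrative rather than the harness's actual API; the point is only the seeding discipline:

import random

# Reported behaviour: one generator shared across the run. Every draw
# advances its state, so the examples chosen for a given doc_id depend on
# how many documents this rank has already processed -- and therefore on
# the number of data-parallel ranks.
shared_rnd = random.Random(1234)

def sample_fewshot_shared(fewshot_pool, k):
    return shared_rnd.sample(fewshot_pool, k)

# #1308-style fix: derive a fresh generator per document from a base seed
# and the doc_id. Every rank then draws the same examples for the same
# document, no matter how documents are sharded across ranks.
def sample_fewshot_per_doc(fewshot_pool, doc_id, k, base_seed=1234):
    rnd = random.Random(base_seed + doc_id)
    return rnd.sample(fewshot_pool, k)

if __name__ == "__main__":
    pool = list(range(100))  # stand-in for the task's few-shot split
    a = sample_fewshot_per_doc(pool, doc_id=7, k=5)
    b = sample_fewshot_per_doc(pool, doc_id=7, k=5)
    assert a == b  # same doc_id -> same few-shot examples on every rank

With per-document seeding, sharding the dataset across 1 or 8 ranks no longer changes which few-shot examples any document sees, which is exactly the invariance missing from the results table above.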