EleutherAI / gpt-neox

An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries

Home Page: https://www.eleuther.ai/


The results of running eval show only 1 digit after decimal point for acc on all tested tasks

lernerjenny opened this issue · comments

Describe the bug
The results of running eval.py show only 1 digit after the decimal point for acc on all tested tasks.
If there is a configuration argument that controls this, I found no mention of it.

Example:
{
  "results": {
    "hellaswag": {
      "acc,none": 0.3,
      "acc_stderr,none": 0.15275252316519466,
      "acc_norm,none": 0.4,
      "acc_norm_stderr,none": 0.16329931618554522
    },
    "arc_easy": {
      "acc,none": 0.3,
      "acc_stderr,none": 0.15275252316519466,
      "acc_norm,none": 0.3,
      "acc_norm_stderr,none": 0.15275252316519466
    },
    "piqa": {
      "acc,none": 0.8,
      "acc_stderr,none": 0.13333333333333333,
      "acc_norm,none": 0.8,
      "acc_norm_stderr,none": 0.13333333333333333
    },
    "sciq": {
      "acc,none": 0.9,
      "acc_stderr,none": 0.09999999999999999,
      "acc_norm,none": 0.9,
      "acc_norm_stderr,none": 0.09999999999999999
    },
    "arc_challenge": {
      "acc,none": 0.2,
      "acc_stderr,none": 0.13333333333333333,
      "acc_norm,none": 0.2,
      "acc_norm_stderr,none": 0.13333333333333333
    },
  },
To Reproduce
Steps to reproduce the behavior:

  1. Run python deepy.py eval.py --conf_dir pythia 1B.yml --eval_tasks lambada_openai hellaswag piqa arc_easy arc_challenge winogrande sciq
  2. Observe the generated results JSON

Expected behavior
Provide a configuration argument to set the number of digits shown after the decimal point, and show at least 4 decimal places by default.
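
A rough sketch of what such an option could look like when the results dictionary is written out (the format_metrics helper and its digits parameter are hypothetical, not existing gpt-neox options):

import json

def format_metrics(obj, digits=4):
    # Render every float with a fixed number of decimal places before dumping to JSON.
    if isinstance(obj, float):
        return f"{obj:.{digits}f}"
    if isinstance(obj, dict):
        return {k: format_metrics(v, digits) for k, v in obj.items()}
    if isinstance(obj, list):
        return [format_metrics(v, digits) for v in obj]
    return obj

results = {"hellaswag": {"acc,none": 0.3, "acc_stderr,none": 0.15275252316519466}}
print(json.dumps(format_metrics(results)))
# {"hellaswag": {"acc,none": "0.3000", "acc_stderr,none": "0.1528"}}

Note that this only changes how the numbers are displayed; the underlying metric values are unchanged.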

Proposed solution
If you have an idea for how we can fix this problem, describe it here.

Screenshots
If applicable, add screenshots to help explain your problem.

Environment (please complete the following information):

  • GPUs:
  • Configs:

Additional context
Add any other context about the problem here.

I found the problem:

limit=10, # limit,

limit=10 causes this issue and, much worse, incorrect eval results.
The following warning can be found in the lm-evaluation-harness: "--limit SHOULD ONLY BE USED FOR TESTING. REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT."
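
For context, a minimal sketch of the call into the eval harness (the call site inside gpt-neox is paraphrased here; evaluator.simple_evaluate and its limit argument are part of the lm-evaluation-harness API, and the pythia-1b model name is just an example):

from lm_eval import evaluator

# limit=10 evaluates only the first 10 examples of each task, so the reported
# metrics describe a 10-example subset rather than the full benchmark.
results = evaluator.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1b",
    tasks=["hellaswag", "arc_easy", "piqa", "sciq", "arc_challenge"],
    limit=None,  # the default; use a small integer only for quick smoke tests
)

Passing limit=None (or simply not setting it) runs every example and yields the real metrics.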

Yes, and specifically using limit=10 means only 10 items are run per task, so every accuracy is a multiple of 0.1 and it's mathematically impossible for the remaining decimal digits to be non-zero :)
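
To make the arithmetic concrete (a toy calculation, not harness output): with 10 examples per task the accuracy can only be k/10, and the binomial standard error matches the acc_stderr values in the report above, e.g. sqrt(0.3 * 0.7 / 9) ≈ 0.1528:

import math

n = 10  # examples per task when limit=10
for k in range(n + 1):
    acc = k / n                                    # only 0.0, 0.1, ..., 1.0 are possible
    stderr = math.sqrt(acc * (1 - acc) / (n - 1))  # sample standard error of the mean
    print(f"correct={k:2d}  acc={acc:.1f}  stderr={stderr}")

Any digits beyond the first decimal place of acc are therefore zero by construction, not because of a formatting limit.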