huggingface / evaluate

🤗 Evaluate: A library for easily evaluating machine learning models and datasets.

Home Page: https://huggingface.co/docs/evaluate


AttributeError: module 'sacrebleu' has no attribute '__version__'

dsj96 opened this issue · comments

commented

env

Thu May  4 11:28:29 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  Off  | 00000000:0B:00.0 Off |                    0 |
| N/A   26C    P0    41W / 163W |      0MiB / 32510MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  Off  | 00000000:89:00.0 Off |                    0 |
| N/A   31C    P0    41W / 163W |      0MiB / 32510MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
transformers                  4.29.0.dev0         /userhome/dsj/transformers
peft                          0.3.0.dev0          /userhome/dsj/peft
evaluate                      0.4.1.dev0          /userhome/dsj/evaluate/src
deepspeed                     0.9.1+7832efbb      /userhome/dsj/deepspeed
sacrebleu                     2.3.1               /userhome/dsj/sacrebleu

run cmd

model=mt0-small
config_file=ds_zero3_cpu.yaml
dataset_name=./data/fairseq/data/wmt10/subset_subset_instruction_train_dev_tst
raw_dataset_name=wmt10
r=8
lora_alpha=32
lora_dropout=0.1
text_column=SRC
label_column=TGT
lr=5e-5
num_epochs=1
batch_size=32
do_test=False
target_max_length=256
GPU=True
max_new_tokens=256
num_beams=1
no_repeat_ngram_size=3

model_name_or_path=./pre_trained_model/$model
output_dir=./trained_checkpoint/model-$model-raw_dataset_name-$raw_dataset_name-r-$r-lora_alpha-$lora_alpha-lora_dropout-$lora_dropout-text_column-$text_column-label_column-$label_column-lr-$lr-num_epochs-$num_epochs-batch_size-$batch_size-target_max_length-$target_max_length-GPU-$GPU-max_new_tokens-$max_new_tokens-num_beams-$num_beams-no_repeat_ngram_size-$no_repeat_ngram_size
echo $output_dir
accelerate launch --config_file=$config_file examples/mt0_peft_lora_ds_zero3_offload.py --model_name_or_path=$model_name_or_path --dataset_name=$dataset_name\
                    --r=$r --lora_alpha=$lora_alpha --lora_dropout=$lora_dropout --text_column=$text_column --label_column=$label_column \
                    --lr=$lr --num_epochs=$num_epochs --batch_size=$batch_size\
                    --target_max_length=$target_max_length --GPU  --output_dir=$output_dir --do_test

script

Python script modified from this link:

    metric = evaluate.load("sacrebleu")
    eval_result = metric.compute(predictions=eval_preds, references=dataset['validation'][label_column])
    logger.info({"bleu": eval_result["score"]})
    accelerator.print(f"{eval_result=}")
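
For context, this is the only place the metric is used. A minimal, self-contained version of that call looks like this (a sketch with made-up toy inputs, not my real WMT data):

    # Minimal sanity check of the sacrebleu metric loaded through evaluate,
    # using toy inputs rather than the real dataset.
    import evaluate

    metric = evaluate.load("sacrebleu")
    result = metric.compute(
        predictions=["hello there general kenobi"],
        references=[["hello there general kenobi"]],  # one list of references per prediction
    )
    print(result["score"])  # an exact match should score 100.0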

bug report

When I run the program, both launched processes fail with the following traceback:

Traceback (most recent call last):
  File "examples/mt0_peft_lora_ds_zero3_offload.py", line 419, in <module>
    main()
  File "examples/mt0_peft_lora_ds_zero3_offload.py", line 347, in main
    metric = evaluate.load("sacrebleu")
  File "/userhome/dsj/evaluate/src/evaluate/loading.py", line 751, in load
    evaluation_instance = evaluation_cls(
  File "/userhome/dsj/evaluate/src/evaluate/module.py", line 191, in __init__
    info = self._info()
  File "/root/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-metric--sacrebleu/28676bf65b4f88b276df566e48e603732d0b4afd237603ebdf92acaacf5be99b/sacrebleu.py", line 108, in _info
    if version.parse(scb.__version__) < version.parse("1.4.12"):
AttributeError: module 'sacrebleu' has no attribute '__version__'

expected behavior

I have tried changing the sacrebleu and evaluate versions, deleting the cached files (in /root/.cache/huggingface/modules/evaluate_modules/metrics), and passing evaluate.load("sacrebleu", cache_dir='../huggface_cache'), but none of that seems to work. Could you give me some advice?

However, it works when I run the same calls line by line in a Python shell, which confuses me:

Python 3.8.10 | packaged by conda-forge | (default, May 11 2021, 07:01:05) 
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import evaluate
>>> metric = evaluate.load("sacrebleu")
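
To see why the launched run and the interactive shell behave differently, a small diagnostic like the following could be added near the top of the training script (just debug prints; the idea is to check what is actually imported under the name sacrebleu in each environment):

    # Diagnostic sketch: check which file provides the `sacrebleu` module and
    # whether it exposes __version__. If __file__ points into the current
    # working directory rather than the installed package, a local file or
    # folder named `sacrebleu` is shadowing the real library.
    import sacrebleu

    print("sacrebleu loaded from:", getattr(sacrebleu, "__file__", None))
    print("sacrebleu version:    ", getattr(sacrebleu, "__version__", "MISSING"))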
commented

I found where the problem lies.

Hi, I have the same problem. Can you tell me how you solved it?

In my case, the cause was a file in the current working directory with the same name as the sacrebleu library, which shadowed the real package. Check whether anything in your current path conflicts with sacrebleu; a rough check is sketched below.
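
Something along these lines, run from the directory where the script is launched, can spot such a name conflict (a rough sketch; which directory to scan is an assumption, adjust to your setup):

    # Rough check for local files or folders that would shadow the installed
    # sacrebleu package when Python is started from this directory.
    from pathlib import Path

    cwd = Path(".")  # assumed: the directory you run `accelerate launch` from
    suspects = [p for p in cwd.iterdir() if p.name in ("sacrebleu.py", "sacrebleu")]
    if suspects:
        print("Possible shadowing of the sacrebleu package:", suspects)
    else:
        print("No obvious sacrebleu name conflicts in", cwd.resolve())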

It also happened to me, and I solved it by upgrading sacrebleu with pip install --upgrade sacrebleu.
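
After upgrading, it may be worth confirming that the module Python imports matches the installed distribution (a small check, assuming Python 3.8+ so that importlib.metadata is available):

    # Compare the version of the installed sacrebleu distribution with the
    # version reported by the imported module; a mismatch or a missing
    # __version__ again suggests a shadowed import rather than an old install.
    from importlib import metadata
    import sacrebleu

    print("installed distribution:", metadata.version("sacrebleu"))
    print("imported module:       ", getattr(sacrebleu, "__version__", "MISSING"))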