python evaluate.py throws a ZeroDivisionError
shra1all opened this issue · comments
Hi,
I was able to train the model.
When I try to evaluate the model with the command below, I get the following error. How do I overcome this?
(pytG) Power-Ubuntu-18-9:~/im2latex-master$ python evaluate.py --split=test --model_path=/home/im2latex-master/save/best_ckpt.pt --data_path=/home/im2latex-master/data --batch_size=8
Load vocab including 394 words!
0%| | 0/1295 [00:00<?, ?it/s]/home/anaconda3/envs/pytG/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448278899/work/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
0%| | 0/1295 [00:01<?, ?it/s]
Loaded 0 formulas from ./results/result.txt
Loaded 0 formulas from ./results/ref.txt
Traceback (most recent call last):
File "evaluate.py", line 89, in &lt;module&gt;
main()
File "evaluate.py", line 84, in main
score = score_files(args.result_path, args.ref_path)
File "/home/im2latex-master/model/score.py", line 31, in score_files
"BLEU-4": bleu_score(refs, hyps)*100,
File "/home/im2latex-master/model/score.py", line 68, in bleu_score
BLEU_4 = nltk.translate.bleu_score.corpus_bleu(
File "/home/anaconda3/envs/pytG/lib/python3.8/site-packages/nltk/translate/bleu_score.py", line 205, in corpus_bleu
p_n = [
File "/home/anaconda3/envs/pytG/lib/python3.8/site-packages/nltk/translate/bleu_score.py", line 206, in &lt;listcomp&gt;
Fraction(p_numerators[i], p_denominators[i], _normalize=False)
File "/home/anaconda3/envs/pytG/lib/python3.8/fractions.py", line 178, in __new__
raise ZeroDivisionError('Fraction(%s, 0)' % numerator)
ZeroDivisionError: Fraction(0, 0)
I haven't changed anything in the copied repo. Training succeeds, but this error appears during evaluation (by default on the test data).
Thanks in advance.
Shravan
I'm getting a similar error. I tried upgrading my nltk version, but there was no improvement.
I'm getting a similar error. Is there any update or solution?
Change the beam size from 5 to 1 in evaluate.py:
parser.add_argument("--beam_size", type=int, default=1)
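For context on the crash itself: the log shows `Loaded 0 formulas from ./results/result.txt`, so score.py passes empty reference/hypothesis lists to nltk's `corpus_bleu`, whose n-gram denominators are then zero and `Fraction(0, 0)` raises `ZeroDivisionError`. A guard like the sketch below (the helper name `check_score_inputs` is illustrative, not part of the repo) could be called at the top of `score_files` to fail fast with a clearer message:

```python
def check_score_inputs(refs, hyps):
    """Fail with a readable error instead of ZeroDivisionError.

    nltk's corpus_bleu builds Fraction(numerator, denominator) from
    n-gram counts; with zero loaded formulas both counts are 0, so
    Fraction(0, 0) raises ZeroDivisionError deep inside nltk.
    Checking up front surfaces the real problem: empty result/ref files.
    """
    if not refs or not hyps:
        raise ValueError(
            "No formulas loaded from the result/ref files -- "
            "verify that decoding produced output (e.g. try "
            "--beam_size=1) before computing BLEU."
        )
```

Usage would be a single call such as `check_score_inputs(refs, hyps)` right after the files are loaded in `score_files`, before any BLEU computation.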