prajdabre / yanmtt

Yet Another Neural Machine Translation Toolkit

Trying to pretrain mBART model.

nikhilbyte opened this issue · comments

Hi,
I'm trying to pre-train an mBART model using the following parameters:
!python pretrain_nmt.py -n 1 -nr 0 -g 1 --use_official_pretrained --fp16 --pretrained_model "facebook/mbart-large-50" --model_path "facebook/mbart-large-50" --tokenizer_name_or_path "facebook/mbart-large-50" --mono_src "/content/yanmtt/cleaned_Sanskrit_text_for_LM.txt" --shard_files --batch_size 16

I'm getting this error.

```
Using label smoothing of 0.1
Using gradient clipping norm of 1.0
Using softmax temperature of 1.0
Masking ratio: 0.3
Training for: ['']
Shuffling corpus!
Zero size batch due to an abnormal example. Skipping empty batch.
Zero size batch due to an abnormal example. Skipping empty batch.
Zero size batch due to an abnormal example. Skipping empty batch.
Saving the model
Loading from checkpoint
Traceback (most recent call last):
  File "pretrain_nmt.py", line 888, in <module>
    run_demo()
  File "pretrain_nmt.py", line 885, in run_demo
    mp.spawn(model_create_load_run_save, nprocs=args.gpus, args=(args,files,train_files,)) #
  File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 199, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
    while not context.join():
  File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 118, in join
    raise Exception(msg)
Exception:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/content/yanmtt/pretrain_nmt.py", line 488, in model_create_load_run_save
    lprobs, labels, args.label_smoothing, ignore_index=tok.pad_token_id
  File "/content/yanmtt/common_utils.py", line 130, in label_smoothed_nll_loss
    nll_loss = -lprobs.gather(dim=-1, index=target)
RuntimeError: Size does not match at dimension 1 expected index [1, 13, 1] to be smaller than src [1, 12, 250054] apart from dimension 2
```

Hi,
Your command is bound to give errors. You are missing a few things:

--langs hi_IN (since there's no language token for Sanskrit, you may have to use the one for Hindi. Because you don't provide a language token, the code falls back to a default one that the mBART tokenizer doesn't recognize, so it gets segmented into several pieces and you end up with the size-mismatch error above; see the quick tokenizer check at the end of this reply.)

Don't use --fp16 if you use multiple GPUs. Regardless, I've had mixed results with fp16, so I'd avoid it.

--batch_size 16 (I think you meant this to be 16 sentences, in which case you also need the flag --batch_size_indicates_lines. By default, --batch_size is the number of tokens per batch.)

Additionally, be careful about learning rates, dropout, etc. Good luck!
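To illustrate the language-token point, here is a small, hypothetical check (my own sketch, not part of yanmtt; it only uses the Hugging Face tokenizer for the same "facebook/mbart-large-50" model): with src_lang set to hi_IN the tokenizer prepends a single known language token id, whereas an unrecognized language string is treated as ordinary text and split into several subwords, which shifts the sequence length and leads to shape mismatches like the gather() error above.

```python
# Hypothetical illustration (not from yanmtt): compare a recognized language
# code against an unrecognized placeholder string.
from transformers import MBart50TokenizerFast

tok = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50")

text = "some example sentence"

# Recognized language code: hi_IN is prepended as a single special token id.
tok.src_lang = "hi_IN"
print(tok(text).input_ids)

# An unrecognized "language token" (e.g. a made-up sa_IN) is just ordinary text
# to the tokenizer, so it is segmented into multiple subword pieces.
print(tok.tokenize("sa_IN"))
```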

Hi,
Thanks for your reply.
I added the arguments.
!python pretrain_nmt.py -n 1 -nr 0 -g 1 --use_official_pretrained --langs hi_IN --batch_size_indicates_lines --pretrained_model "facebook/mbart-large-50" --model_path "facebook/mbart-large-50" --tokenizer_name_or_path "facebook/mbart-large-50" --mono_src "/content/yanmtt/cleaned_Sanskrit_text_for_LM.txt" --shard_files --batch_size 1
You see, I've set the batch size to 1 and even then I'm getting this:

RuntimeError: CUDA out of memory. Tried to allocate 978.00 MiB (GPU 0; 15.90 GiB total capacity; 14.69 GiB already allocated; 337.75 MiB free; 14.76 GiB reserved in total by PyTorch)

GPU:

[Screenshot of GPU usage, 2022-04-24, 8:49 AM]

Hi,

I can't really help with the GPU memory issue; 16 GB is a tad small. The only thing you can try is to limit the maximum sequence length via the --hard_truncate_length option. Find out the average sequence length in your corpus and then play with that argument (a rough way to estimate it is sketched below). By the way, why not try IndicBART, which is about a third of mBART's size and is better suited for Indic languages? Since it's compact, you won't run into memory issues.
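For reference, a rough sketch (my own, not part of yanmtt) of how you could estimate subword sequence lengths in the monolingual file from the command above, to guide the --hard_truncate_length choice:

```python
# Rough sketch (not from yanmtt): measure subword lengths of the corpus with
# the same tokenizer used for pretraining, to help pick --hard_truncate_length.
from transformers import MBart50TokenizerFast

tok = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50")

lengths = []
with open("/content/yanmtt/cleaned_Sanskrit_text_for_LM.txt", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            lengths.append(len(tok.tokenize(line)))

lengths.sort()
print("sentences:", len(lengths))
print("average length:", sum(lengths) / len(lengths))
print("95th percentile:", lengths[int(0.95 * (len(lengths) - 1))])
```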

Sure, Thanks.