OpenNMT / CTranslate2

Fast inference engine for Transformer models

Home Page: https://opennmt.net/CTranslate2

Unexpected inference results from Flan-T5 XXL converted to ctranslate2 with version 4.2.1 and 4.1.1 (using tensor parallel)

gk-kd opened this issue

I'm using the off-the-shelf Flan-T5 XXL model in our project, and for deployment we converted it to the CTranslate2 format with the following command:
ct2-transformers-converter --model ~/input_folder/ --output_dir ~/flant5_ct2/

I'm now hosting the model as a gRPC server, loading it in tensor parallel mode like this:
ctranslate2.Translator(checkpoint_path, device="cuda", tensor_parallel=True)

I start the server with mpirun using 2 processes so that tensor parallelism kicks in. This works well and the model is loaded evenly across the 2 GPUs:
mpirun -n 2 python model_server.py
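
For reference, here is a minimal, simplified sketch of what the serving script does (the gRPC plumbing is omitted, and the tokenizer handling follows the usual CTranslate2 pattern for T5 models rather than our exact code):

# model_server.py -- simplified sketch, launched with: mpirun -n 2 python model_server.py
import os

import ctranslate2
import transformers

checkpoint_path = os.path.expanduser("~/flant5_ct2/")

# With tensor_parallel=True the weights are sharded across the GPUs of the
# MPI processes started by mpirun.
translator = ctranslate2.Translator(
    checkpoint_path,
    device="cuda",
    tensor_parallel=True,
)

# Hugging Face tokenizer for Flan-T5 (assumed; any tokenizer producing the
# same SentencePiece vocabulary would work).
tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-xxl")

def generate(prompt: str) -> str:
    # CTranslate2 expects token strings, not token ids.
    input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
    results = translator.translate_batch([input_tokens])
    output_tokens = results[0].hypotheses[0]
    return tokenizer.decode(
        tokenizer.convert_tokens_to_ids(output_tokens),
        skip_special_tokens=True,
    )

if __name__ == "__main__":
    # Every MPI rank runs this script; the gRPC server would be started here.
    print(generate("Who is president of united states?"))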

Now when I run inference on it, it returns the following response to my prompt ("Who is president of united states?"):
"<pad><pad><pad><pad><pad><pad>"

This strange behaviour only happens with ctranslate2==4.2.1.

Any suggestions on how to fix it would be really helpful.

Do you have the same behavior with ctranslate2 4.1.1?

No, it works fine with 4.1.1, but the results differ between running with tensor parallel and without it. I saw that some tensor-parallel bugs were fixed in 4.2.0, so I tried upgrading, but ran into this different issue instead.

Btw, the response looks like this:

<pad><pad><pad><pad><pad><pad>

I tried different quantization types such as bfloat16 and float16, but nothing seems to work.
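
For reference, this is roughly how the compute type is set at load time (the path here is a placeholder; the same types can also be baked in at conversion time):

import ctranslate2

# Force the computation type when loading the converted model; the weights
# are converted to the requested type on load.
translator = ctranslate2.Translator(
    "flant5_ct2",               # placeholder path to the converted model
    device="cuda",
    tensor_parallel=True,
    compute_type="bfloat16",    # also tried "float16", etc.
)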

I also experienced an issue with the 4.2.1 Translator.
Inference with the 4.2.1 Translator produced poor results. I didn't inspect the output itself; I just looked at my metrics, which dropped to zero.
This didn't happen on 4.1.1 or 3.24.0.

I thought about reconverting my models with the 4.2.1 converter (I used the 3.24.0 converter to generate the Translators I'm currently using), but I haven't had the time to do it yet.

I am also seeing this regression for all variants of Flan-T5 (base, large, XL): the model just outputs <pad> repeatedly. We convert with bfloat16, since it is a known issue that T5 degrades with any other precision. We reverted back to 3.24.1. We perform inference without tensor parallelism, on just a single GPU.
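
For completeness, the conversion we run is along these lines (the model name and output directory here are placeholders, not our exact paths):

ct2-transformers-converter --model google/flan-t5-large --output_dir flant5_large_ct2 --quantization bfloat16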