OpenNMT / CTranslate2

Fast inference engine for Transformer models

Home Page: https://opennmt.net/CTranslate2

Problem with GPU allocation after updating to CTranslate2 4.0.0

carolinaxxxxx opened this issue · comments

When the device_index = 1 parameter (GPU 1) is set, GPU 0 is still allocated a small amount of memory (about 263 MB in my case) and shows signs of activity, although it should not. This is clearly a result of CTranslate2 4.0.0: after reverting to ctranslate2 3.24.0, the problem disappears.

The example above is for the Whisper model, but I also tried with LLM models and got the same result.
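For reference, a minimal sketch of the reproduction described above, using CTranslate2's Python API; the model directory name is a placeholder, and running it requires a machine with at least two GPUs:

```python
import ctranslate2

# Request only the second GPU. With CTranslate2 4.0.0 the report above
# observed that GPU 0 still received a small (~263 MB) allocation even
# though only GPU 1 is selected; 3.24.0 did not show this behavior.
model = ctranslate2.models.Whisper(
    "whisper-small-ct2",  # placeholder path to a converted model (assumption)
    device="cuda",
    device_index=1,       # only GPU 1 should be initialized
)
# After loading, `nvidia-smi` can be used to check which GPUs hold memory.
```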

This is not a bug. Starting with CUDA 12, it seems that more memory is needed to initialize a GPU; the logic is the same as in the previous version. I have a small fix here to prevent initializing unused GPUs. Thank you for reporting it.