marella / ctransformers

Python bindings for Transformer models implemented in C/C++ using the GGML library.

Does ctransformers boost inference speed for LLMs?

pradeepdev-1995 opened this issue · comments

I have converted my fine-tuned Hugging Face model to the .gguf format and ran inference with ctransformers.
I am using a CUDA GPU machine.
But I did not observe any inference speed improvement with ctransformers: I see the same latency with transformers-based inference and ctransformers-based inference.
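One common cause of this: by default ctransformers runs entirely on the CPU, even on a CUDA machine, unless layers are explicitly offloaded with the `gpu_layers` argument (and the package is installed with CUDA support via `pip install ctransformers[cuda]`). A minimal sketch, assuming a hypothetical model path and a LLaMA-style architecture:

```python
from ctransformers import AutoModelForCausalLM

# gpu_layers defaults to 0 (CPU only); set it high enough to offload
# all transformer layers to the GPU.
llm = AutoModelForCausalLM.from_pretrained(
    "path/to/model.gguf",  # hypothetical path to the converted model
    model_type="llama",    # adjust to match the fine-tuned model's architecture
    gpu_layers=50,         # number of layers to offload to CUDA
)

print(llm("Hello, how are you?"))
```

If `gpu_layers` was left at its default when you benchmarked, the ctransformers run was CPU-bound, which would explain seeing no improvement over GPU-based transformers inference.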