Does ctransformers boost the inference speed in llm inference?
pradeepdev-1995 opened this issue
PRADEEP T commented
I have converted my finetuned Hugging Face model to .gguf format and triggered inference with ctransformers.
I am using a CUDA GPU machine.
But I did not observe any inference speed improvement when using ctransformers. I am seeing the same latency with transformers-based inference and ctransformers-based inference.