llama.cpp portable gemma3 sample - getting low GPU usage
Mushtaq-BGA opened this issue
Hi,
I am using portable llama.cpp on an MTL 165H (Intel Core Ultra 7 165H, Meteor Lake) platform, whose iGPU has a maximum frequency of 2.3 GHz.
I downloaded a gemma3 GGUF and tried running inference with the command below:
./llama-cli -m $model_path --no-context-shift -n 32 --prompt "What is AI?" -t 8 -e -ngl 50 --color -c 2048 --temp 0
Observed GPU behavior: GPU usage stays low, and the GPU frequency only rises to about 1.1 GHz instead of the 2.3 GHz maximum.
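A possible first diagnostic, offered only as a sketch and not part of the original report: confirm that the layers were actually offloaded to the GPU, and watch the iGPU live while the model runs. This assumes a Linux host with the intel-gpu-tools package installed, and the exact wording of llama.cpp's log lines varies between builds.

# Sketch only: check llama-cli's startup log for the layer-offload summary;
# log wording differs across llama.cpp versions, so grep loosely.
./llama-cli -m $model_path -ngl 50 -n 1 --prompt "test" 2>&1 | grep -i offload

# Sketch only: in a second terminal, monitor live iGPU utilization and
# frequency while inference runs (requires the intel-gpu-tools package).
sudo intel_gpu_top

If the offload summary reports fewer layers on the GPU than expected, the low utilization may come from layers still running on the CPU rather than from the iGPU itself.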
Hi @Mushtaq-BGA, which gemma3 GGUF are you using?
