intel / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.

Repository from GitHub: https://github.com/intel/ipex-llm

llama.cpp portable gemma3 sample - getting low GPU usage

Mushtaq-BGA opened this issue

Hi,
I am using the portable llama.cpp on an MTL 165H platform with an iGPU rated at 2.3 GHz.
I downloaded a gemma3 GGUF and tried running inference with the command below:

./llama-cli -m $model_path --no-context-shift -n 32 --prompt "What is AI?" -t 8 -e -ngl 50 --color -c 2048 --temp 0

Observed GPU usage: I see low GPU utilization, and the GPU frequency only reaches about 1.1 GHz.

[screenshot of GPU utilization attached]
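(For reference, a minimal sketch of how such a run is typically launched with the portable llama.cpp SYCL build on an Intel iGPU; the SYCL_CACHE_PERSISTENT and ONEAPI_DEVICE_SELECTOR settings follow the ipex-llm llama.cpp quickstart, and the level_zero device index and model path are assumptions that may differ on other machines.)

# Sketch, assuming the ipex-llm portable llama.cpp (SYCL) build; device index and model path are placeholders
export SYCL_CACHE_PERSISTENT=1              # cache compiled SYCL kernels across runs
export ONEAPI_DEVICE_SELECTOR=level_zero:0  # pin execution to the first Level Zero device (the iGPU here)
./llama-cli -m $model_path --no-context-shift -n 32 --prompt "What is AI?" -t 8 -e -ngl 50 --color -c 2048 --temp 0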

Hi @Mushtaq-BGA, which gemma3 GGUF are you using?