intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.


Support for MTL-H & MTL-U iGPU on Linux

huichuno opened this issue

The documentation on the IPEX-LLM website specifically mentions support for the MTL iGPU on Windows but not on Linux. Please add documentation covering the MTL iGPU on Linux as well.
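For context, the general Intel GPU ("xpu") path in ipex-llm, which such Linux documentation would presumably describe for MTL, looks roughly like the sketch below. This assumes an XPU-enabled ipex-llm install (e.g. via `pip install --pre --upgrade "ipex-llm[xpu]"`) and a sourced oneAPI environment; the model ID is purely illustrative.

```python
# Minimal sketch: 4-bit inference on an Intel GPU (iGPU or discrete) with ipex-llm.
# Assumes ipex-llm[xpu] is installed and the oneAPI environment has been sourced;
# the model ID below is an illustrative placeholder.
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # illustrative model choice

# ipex-llm applies 4-bit weight quantization at load time.
model = AutoModelForCausalLM.from_pretrained(
    model_id, load_in_4bit=True, trust_remote_code=True
)
model = model.to("xpu")  # move the model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

with torch.inference_mode():
    inputs = tokenizer("What is Meteor Lake?", return_tensors="pt").to("xpu")
    output = model.generate(inputs.input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The code itself is device-agnostic across Intel GPUs; what the requested documentation would presumably need to cover is the MTL-specific Linux setup (kernel/driver and oneAPI requirements) rather than a different API.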