intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.


XEON and MAX with Kernel 5.15 configuration

weiseng-yeap opened this issue and commented:

Team,

We are currently running Ubuntu Server 22.04 with kernel 5.15.

Could you tell us which oneAPI version and GPU driver version work with the latest IPEX-LLM framework?

Thanks!
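For reference, one quick way to confirm that whichever driver and oneAPI runtime you install are actually visible to the framework is to query the XPU backend from Python. This is a minimal sketch, assuming the `ipex-llm[xpu]` stack with `intel_extension_for_pytorch` is installed and the oneAPI environment has been sourced (e.g. via `source /opt/intel/oneapi/setvars.sh`); it is not an official compatibility matrix.

```python
# Sanity check: is the GPU driver + oneAPI runtime visible to PyTorch?
# Assumes ipex-llm[xpu] / intel_extension_for_pytorch is installed and
# the oneAPI environment variables have been sourced in this shell.
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" backend

if torch.xpu.is_available():
    # List every XPU device the runtime can see (Max, Arc, Flex, iGPU, ...)
    for i in range(torch.xpu.device_count()):
        print(f"xpu:{i} -> {torch.xpu.get_device_name(i)}")
else:
    print("No XPU device found; check the GPU driver and oneAPI install.")
```

If no device shows up, the oneAPI `sycl-ls` command-line utility can also list the devices the runtime detects, which helps separate driver problems from Python-environment problems.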