intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.


install issue

K-Alex13 opened this issue · comments

[screenshot of the installation error attached]

Today I found that I cannot install this package, even using the install method from the Intel website.
Please help me.

Hi @K-Alex13,

On Windows, to install IPEX-LLM on Intel CPUs, you could use:

conda create -n llm python=3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all]

On Windows, to install IPEX-LLM on Intel GPUs, you could use:

conda create -n llm python=3.11 libuv
conda activate llm

pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
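Once the install finishes, a quick way to sanity-check the environment is to confirm the key packages are importable before loading any model. This is a minimal sketch (the package list below is an assumption; adjust it for the CPU vs. GPU install):

```python
import importlib.util

def check_install(pkgs):
    # Return a dict mapping each package name to whether it is importable
    # in the current environment (without actually importing it).
    return {p: importlib.util.find_spec(p) is not None for p in pkgs}

if __name__ == "__main__":
    # "ipex_llm", "torch", and "transformers" are assumed to be the
    # packages pulled in by the install commands above.
    status = check_install(["ipex_llm", "torch", "transformers"])
    for name, ok in status.items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

If any package shows MISSING, re-run the corresponding `pip install` command inside the activated `llm` conda environment.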

Please refer to the installation documentation for more info regarding IPEX-LLM installation.

Please let us know for any further problems :)

Thank you!