intel / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.

Repository from Github: https://github.com/intel/ipex-llm

The model loads successfully, but chat requests fail with an error.

gaoconggit opened this issue

UR_RESULT_ERROR_OUT_OF_RESOURCES means you are running out of resources: 16 GB of VRAM cannot hold a 32 GB model. You could try a smaller model.
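For context, the failure is simple arithmetic: the model weights (plus runtime overhead such as the KV cache) must fit in device memory. Below is a minimal, hypothetical sketch of that check; the helper name and the 20% overhead factor are illustrative assumptions, while the 16 GB / 32 GB figures come from this thread.

```python
# Rough feasibility check: will a model of a given on-disk size fit in VRAM?
# The overhead factor is an illustrative assumption covering KV cache and
# activations; real overhead varies with context length and backend.

def fits_in_vram(model_size_gb: float, vram_gb: float, overhead: float = 1.2) -> bool:
    """Return True if the weights plus ~20% runtime overhead should fit."""
    return model_size_gb * overhead <= vram_gb

# The case from this issue: a 32 GB model on a 16 GB GPU does not fit.
print(fits_in_vram(32.0, 16.0))  # False

# A smaller model, e.g. ~7B parameters quantized to 4 bits (~4 GB), fits easily.
print(fits_in_vram(4.0, 16.0))   # True
```

When the check fails, the usual remedies are a smaller model or a more aggressive quantization, which is what the comment above suggests.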

With the official Ollama (presumably a slightly newer version), I can just barely run it, at roughly 1.8 tokens/s.