intel / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.

Repository from GitHub: https://github.com/intel/ipex-llm

Ollama Portable Zip SIGSEGV

idkSeth opened this issue · comments

Running ./ollama and ./start-ollama.sh crashes with SIGSEGV: segmentation violation.

How to reproduce
Follow the quickstart docs to download and start Ollama.

Intel oneAPI is installed on my system. Sourcing setvars.sh beforehand prevents the crash.

A text file containing the error is attached.

Environment information

Outputs of the script before and after running setvars.sh are provided.

env_after_setvars.txt
env_before_setvars.txt
error.txt
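The attached before/after environment dumps can be reproduced with a comparison along these lines (a minimal sketch, not the ipex-llm env-check script; the oneAPI install path and the grep filter are assumptions):

```shell
# Capture the environment before and after sourcing setvars.sh, then diff to
# see which variables it adds or changes. The install path below is the
# default oneAPI location and may differ on your system.
env | sort > env_before_setvars.txt
# source /opt/intel/oneapi/setvars.sh   # uncomment on a machine with oneAPI
env | sort > env_after_setvars.txt
# Variables related to library search paths are the ones most likely to
# explain the crash going away; the filter here is an assumption.
diff env_before_setvars.txt env_after_setvars.txt | grep -Ei 'ld_library_path|oneapi|sycl' || true
```

Comparing the two dumps this way narrows the workaround down to the specific variables setvars.sh exports.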

I have the same issue. I hope it gets fixed soon.

Please do not source setvars.sh, as you are using the portable zip. The oneAPI libs are included in the tgz.


I initially ran Ollama without sourcing setvars.sh, and the error occurred.

Running source setvars.sh prevents the error and Ollama works; without it, it does not.
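One plausible reason the workaround matters is that setvars.sh prepends oneAPI directories to LD_LIBRARY_PATH, changing which shared objects the dynamic linker resolves. The sketch below illustrates that mechanism; /bin/ls stands in for ./ollama so it runs anywhere, and the fake path is purely illustrative:

```shell
# Record library resolution (names and resolved paths only; load addresses
# vary per run, so awk strips them).
before=$(ldd /bin/ls | awk '{print $1, $3}' | sort)
export LD_LIBRARY_PATH=/tmp/fake-oneapi-libs   # empty/nonexistent dir: no effect
after=$(ldd /bin/ls | awk '{print $1, $3}' | sort)
# With real oneAPI libraries on the path, entries such as libsycl.so would
# resolve into that directory instead of the copies bundled in the tgz.
[ "$before" = "$after" ] && echo "resolution unchanged"
```

Running `ldd` on the bundled ollama binary with and without setvars.sh sourced would show whether the system oneAPI libraries are shadowing (or supplementing) the bundled ones.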

@idkSeth Can you share your CPU and GPU information with me?

OS: Ubuntu oracular 24.10 x86_64
Kernel: Linux 6.11.0-19-generic
CPU: 11th Gen Intel(R) Core(TM) i7-11390H (8) @ 5.00 GHz
GPU: Intel Iris Xe Graphics @ 1.40 GHz [Integrated]