intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.


Try to test multi-XPU with the example

K-Alex13 opened this issue

[screenshot of the error]
Due to the HuggingFace download problem, I downloaded the model from the following link:
https://huggingface.co/Qwen/Qwen1.5-14B-Chat/tree/main
I replaced the model with the model's URL, and this issue came up. I'm not sure what is going wrong. Please help me.

Hi @K-Alex13, if you have downloaded the model from https://huggingface.co/Qwen/Qwen1.5-14B-Chat/tree/main, please just replace 'Qwen/Qwen1.5-14B-Chat' with your local model folder path here: https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/example/GPU/Deepspeed-AutoTP/run_qwen_14b_arc_2_card.sh#L38
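
For reference, this is roughly the loading call that the change amounts to. A minimal sketch, assuming the standard ipex-llm transformers API; the local path is a hypothetical placeholder:

    # Hedged sketch: a local folder path works wherever a HF repo id does.
    from ipex_llm.transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "/path/to/Qwen1.5-14B-Chat",  # hypothetical path, was "Qwen/Qwen1.5-14B-Chat"
        load_in_4bit=True,            # the examples' default 4-bit optimization
        trust_remote_code=True,
    )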

Yes, I already used this method; the error comes up after the step you mentioned.

And the file that the error says is missing is also not in the Qwen/Qwen1.5-14B-Chat file list.

> And the file that the error says is missing is also not in the Qwen/Qwen1.5-14B-Chat file list.

If model.safetensors.index.json is not in your local folder, such an error message would still occur. You may need to check whether all model files are present and complete in your local model folder.
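
For example, you can compare the shards listed in model.safetensors.index.json against what is actually on disk. A minimal sketch, with the model path as a hypothetical placeholder:

    # Hedged sketch: report any shard named in the index that is missing locally.
    import json, os

    model_dir = "/path/to/Qwen1.5-14B-Chat"  # hypothetical local folder
    index_path = os.path.join(model_dir, "model.safetensors.index.json")

    with open(index_path) as f:  # FileNotFoundError here means the index itself is missing
        weight_map = json.load(f)["weight_map"]

    shards = sorted(set(weight_map.values()))
    missing = [s for s in shards if not os.path.exists(os.path.join(model_dir, s))]
    print("missing shards:", missing if missing else "none")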

[screenshot]
What is the function of --low-bit here? I think the default is 4-bit, so the GPU memory needed will be less than 16 GB, but I don't know whether two GPUs are used here. Or can you please tell me how to check GPU usage during inference?

[screenshot]
Why did GPU 0 not output inference results while GPU 1 did?

> What is the function of --low-bit here? I think the default is 4-bit, so the GPU memory needed will be less than 16 GB, but I don't know whether two GPUs are used here. Or can you please tell me how to check GPU usage during inference?

  • As introduced in the README, you can specify other low-bit optimizations (such as fp8) through --low-bit; see the sketch after this list.
  • If you want to monitor GPU usage, you can use a tool named xpu-smi. Install it with sudo apt install xpu-smi; then you can use sudo xpu-smi stats -d 0 to check the memory usage of GPU 0.
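
For context, the --low-bit flag corresponds to the load_in_low_bit argument of ipex-llm's from_pretrained. A minimal sketch, with a hypothetical model path:

    # Hedged sketch: request fp8 instead of the default 4-bit optimization.
    from ipex_llm.transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "/path/to/Qwen1.5-14B-Chat",  # hypothetical local folder
        load_in_low_bit="fp8",        # what --low-bit fp8 requests
        trust_remote_code=True,
    )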

> Why did GPU 0 not output inference results while GPU 1 did?

  • Both GPUs did inference, but we only print the inference result of RANK 0 here. In the log, [0] corresponds to the output of RANK 0, while [1] is RANK 1; see the sketch below.
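
To illustrate the logging behavior, here is a minimal, self-contained sketch of the common pattern (not the example's actual code): every rank runs the computation, but only rank 0 prints. Launch it with, for example, torchrun --nproc_per_node=2 rank_demo.py:

    # rank_demo.py -- hedged sketch of the "only RANK 0 prints" pattern.
    # torchrun sets the RANK/WORLD_SIZE/MASTER_* environment variables.
    import torch.distributed as dist

    dist.init_process_group(backend="gloo")  # CPU-friendly backend for the demo
    rank = dist.get_rank()

    result = f"generated on rank {rank}"  # stand-in for model.generate(...)
    if rank == 0:                         # mirrors the example: only RANK 0 reports
        print(result)

    dist.destroy_process_group()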

[screenshot]
Still not working.

> [screenshot] Still not working.

According to your screenshot, maybe you could try sudo apt install libmetee and sudo apt install libmetee-dev.

How do I use them?

> How do I use them?

Sorry, but I'm not sure what 'them' refers to. The ME TEE Library (libmetee/libmetee-dev) is a C library to access CSE/CSME/GSC firmware via the HECI interface, which the xpu-smi tool seems to need. Can you use xpu-smi now?

I installed the packages you mentioned above and tried to use xpu-smi; the same error comes up.

By the way, I want to know whether this method uses two GPUs as one bigger GPU to run inference, or whether it just puts the model on two different GPUs separately and runs inference separately.

> I installed the packages you mentioned above and tried to use xpu-smi; the same error comes up.

Maybe you could try these steps?

sudo apt-get autoremove libmetee-dev
sudo apt-get autoremove libmetee
sudo apt-get install libmetee
sudo apt-get install libmetee-dev
sudo apt-get install xpu-smi

> By the way, I want to know whether this method uses two GPUs as one bigger GPU to run inference, or whether it just puts the model on two different GPUs separately and runs inference separately.

The model is split and placed onto the two GPUs, so each GPU needs less memory for inference. In this way, you can treat the two GPUs as one bigger one.
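
In other words, this is tensor parallelism. A minimal sketch of the underlying DeepSpeed AutoTP call (names follow the public deepspeed API; the actual ipex-llm example adds its own low-bit conversion and XPU placement on top):

    # Hedged sketch: shard a HF model's linear layers across 2 ranks with
    # DeepSpeed AutoTP, so each card holds roughly half the weights.
    # Run under a 2-rank launcher (e.g. mpirun or the deepspeed launcher).
    import deepspeed
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("/path/to/Qwen1.5-14B-Chat")  # placeholder
    model = deepspeed.init_inference(
        model,
        tensor_parallel={"tp_size": 2},    # split each weight matrix across 2 ranks
        replace_with_kernel_inject=False,  # AutoTP path: shard without fused kernels
    )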