Could ipex-llm support QLoRA fine-tuning for llama3 or other LLMs?
sanbuphy opened this issue
Thank you!
Hi, @sanbuphy. We support llama3 QLoRA fine-tuning, as well as llama2, chatglm3, and others. Please refer to here for detailed example usage.
Hi, but that's GPU fine-tuning. Do we have a CPU fine-tuning demo like the one for Qwen? I have noticed this https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/CPU/QLoRA-FineTuning but I think it may be too simple.
We don't have a CPU demo for Qwen right now, but the CPU and GPU dependencies are almost the same, so a GPU example can be converted to a CPU one with only a few changes.
We can add more fine-tune examples/models for CPUs later. :)
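The "few changes" mentioned above mostly come down to where the model lives: the ipex-llm GPU examples move the model to Intel GPU (`"xpu"`), while a CPU run simply skips that move. A minimal sketch of that pattern is below; `FakeModel` is a hypothetical stand-in so the snippet runs without torch or ipex-llm installed, and the device-handling logic is an assumption based on this thread, not a verified ipex-llm API.

```python
def to_target_device(model, use_xpu: bool):
    """Move the model to Intel GPU ('xpu') when requested; otherwise
    leave it on CPU (no .to() call needed for the CPU variant)."""
    return model.to("xpu") if use_xpu else model


class FakeModel:
    """Hypothetical stand-in for a Hugging Face model, used here only so
    the sketch is runnable without torch/ipex-llm."""
    device = "cpu"

    def to(self, device):
        self.device = device
        return self


# GPU-style example: the model ends up on "xpu".
gpu_model = to_target_device(FakeModel(), use_xpu=True)
print(gpu_model.device)  # -> xpu

# CPU variant: same code path, just without the device move.
cpu_model = to_target_device(FakeModel(), use_xpu=False)
print(cpu_model.device)  # -> cpu
```

In a real ipex-llm example the rest of the QLoRA setup (low-bit loading, PEFT config, trainer) would stay largely unchanged between the two variants; only the device placement differs.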