lyogavin / Anima

33B Chinese LLM, DPO QLORA, 100K context, AirLLM 70B inference with single 4GB GPU


Would adding Parallelism speed up AirLLM?

birdup000 opened this issue

Hello, I can't help but ask whether you have ever tried implementing any parallelism strategies in this program to help inference in general, i.e. to process through the model more quickly. From looking at the code itself, I can't seem to find an existing approach that would suit AirLLM, but I'm fairly convinced one would make an impact on loading and inference speed. A rough sketch of what I mean follows.
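To make the idea concrete, here is a minimal sketch of the kind of pipelining I have in mind: prefetching the next layer's weights from disk on a background thread while the current layer computes on the GPU. The callables `load_layer_weights` and `run_layer` are hypothetical placeholders for whatever AirLLM does internally, not its actual API:

```python
# Hypothetical sketch: overlap disk I/O with GPU compute during
# layer-by-layer inference. load_layer_weights and run_layer are
# stand-ins, not AirLLM's real internals.
from concurrent.futures import ThreadPoolExecutor


def pipelined_forward(hidden, num_layers, load_layer_weights, run_layer):
    """Prefetch layer i+1 from disk while layer i runs on the GPU."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        # Start loading the first layer's weights.
        future = pool.submit(load_layer_weights, 0)
        for i in range(num_layers):
            weights = future.result()  # wait until layer i's weights are ready
            if i + 1 < num_layers:
                # Begin loading the next layer in the background.
                future = pool.submit(load_layer_weights, i + 1)
            # Compute layer i while layer i+1 streams in from disk.
            hidden = run_layer(hidden, weights)
        return hidden
```

Since loading from disk is mostly I/O-bound, even a single background thread like this could hide a good chunk of the load time behind the compute of the previous layer, assuming the two don't contend for the same resources.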