b4rtaz / distributed-llama

Tensor parallelism is all you need. Run LLMs on weak devices or make powerful devices even more powerful by distributing the workload and dividing the RAM usage.
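The core idea, splitting a layer's weight tensor across devices so each one holds and computes only a slice, can be sketched with a toy column-split matrix multiply. This is an illustrative sketch, not the project's actual implementation; the function name and device count are hypothetical.

```python
import numpy as np

def tensor_parallel_matmul(x, W, n_devices):
    """Toy tensor parallelism: column-split W across n_devices.

    Each "device" stores only its shard of W (dividing RAM usage)
    and computes its slice of the output (distributing the workload).
    """
    shards = np.array_split(W, n_devices, axis=1)   # one shard per device
    partials = [x @ shard for shard in shards]      # computed in parallel in practice
    return np.concatenate(partials, axis=-1)        # gather the output slices

x = np.random.rand(4, 8)
W = np.random.rand(8, 16)
# The sharded result matches the single-device matmul.
assert np.allclose(tensor_parallel_matmul(x, W, 4), x @ W)
```

Each worker needs only `1/n_devices` of the weight matrix in memory, which is why adding nodes lets smaller devices host a model that would not fit on any one of them.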
