predibase / lorax

Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs

Home Page: https://loraexchange.ai

If LoRAX is based on Punica kernels, will it be able to support LoRA adapters for Mistral NeMo 12B?

tensimixt opened this issue · comments

Feature request

If LoRAX is based on Punica kernels, will it be able to support LoRA adapters for Mistral NeMo 12B, which has a vocab size > 130k?
Currently, vLLM for example doesn't support vocab_size > 128512 when enable_lora=True.
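
For reference, a minimal sketch (assuming the Hugging Face model id `mistralai/Mistral-Nemo-Instruct-2407`) that checks the model's vocab size against the vLLM LoRA cutoff mentioned above:

```python
from transformers import AutoConfig

# Mistral NeMo 12B uses the Tekken tokenizer with a ~131k-token vocabulary
config = AutoConfig.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")

VLLM_LORA_VOCAB_LIMIT = 128512  # limit reported above for vLLM with enable_lora=True
print(f"vocab_size = {config.vocab_size}")
print(f"exceeds vLLM LoRA limit: {config.vocab_size > VLLM_LORA_VOCAB_LIMIT}")
```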

I think if Hugging Face TGI and LoRAX are based on Punica kernels, they will have the same limitation. Or does this limitation not exist for TGI and LoRAX?

Thank you!

Motivation

Be able to run inference with Mistral NeMo + a LoRA adapter (in a multi-LoRA world).
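
For illustration, a hypothetical sketch of the desired usage with the LoRAX Python client, assuming a LoRAX server already serving the Mistral NeMo 12B base model; the adapter id `my-org/nemo-12b-lora` is a placeholder:

```python
from lorax import Client

# Assumes a LoRAX server serving Mistral NeMo 12B at this address
client = Client("http://127.0.0.1:8080")

# adapter_id is a placeholder for a LoRA adapter fine-tuned on the NeMo base model;
# LoRAX loads and swaps adapters per request
response = client.generate(
    "Summarize the following document:",
    adapter_id="my-org/nemo-12b-lora",
    max_new_tokens=64,
)
print(response.generated_text)
```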

Your contribution

Checked various deployment providers and found this limitation.