mostlygeek / llama-swap

Model swapping for llama.cpp (or any local OpenAI API compatible server)

Repository: https://github.com/mostlygeek/llama-swap

Change profile model name to use a : (colon)

mostlygeek opened this issue

Currently, profiles use the format `coding/model-name`, which splits into `profile-name` and `model-name`. However, this doesn't work well for servers like vLLM, where the model name itself may contain a `/` (e.g., `Qwen/qwen-32B`).

Instead, use a `:` (colon) as the separator, which would result in a combined profile and model name like `coding:Qwen/qwen-32B`.

This would unfortunately break backwards compatibility.
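For illustration, here is a minimal Go sketch of how the proposed separator could be parsed: splitting the requested model string on the *first* colon leaves any `/` in the model name intact. The `splitProfileModel` helper name is hypothetical, not llama-swap's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// splitProfileModel splits a requested model string of the form
// "profile:model" on the first colon. Hypothetical helper, shown
// only to sketch the proposed parsing behavior.
func splitProfileModel(requested string) (profile, model string) {
	parts := strings.SplitN(requested, ":", 2)
	if len(parts) == 2 {
		return parts[0], parts[1]
	}
	// No colon present: treat the whole string as a bare model name.
	return "", requested
}

func main() {
	// A vLLM-style model name containing "/" survives intact.
	profile, model := splitProfileModel("coding:Qwen/qwen-32B")
	fmt.Println(profile, model) // coding Qwen/qwen-32B
}
```

Because only the first colon is treated as the separator, the profile name itself cannot contain a colon, but the model name is free to contain `/` (or further colons).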