Feature request, local assistants
Zibri opened this issue · comments
I experimented with a few assistants on HF.
The problem I am facing is that I can't reproduce the behaviour I get on HF when running the same model locally.
I tried everything I could think of.
I think HF does some filtering or rephrasing or has an additional prompt before the assistant description.
Please help.
I am available for chat on discord https://discordapp.com/users/Zibri/
Note: it would be great to have a feature to export the full assistant definition as a llama.cpp "main" command (or a GPT4All prompt).
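To illustrate what I mean, such an export might look something like this (a sketch only: the model path, sampling values, and the Mistral-style `[INST]` template are placeholder assumptions, not an actual HF export):

```shell
# Hypothetical exported llama.cpp "main" invocation.
# Model path, sampling parameters, and prompt template are placeholders;
# the real template depends on the specific model.
./main -m ./model.gguf \
  --temp 0.7 --top-p 0.95 -c 4096 -n 512 \
  -p "<s>[INST] <assistant system prompt here>\n\n<first user message> [/INST]"
```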
> I think HF does some filtering or rephrasing or has an additional prompt before the assistant description.
None at all! Make sure your prompt format is correct, that's usually the main culprit. Could you share your model config?
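For reference, here is a minimal sketch of what "correct prompt format" means, assuming a Mistral/Llama-2-style instruction template (check your model's `tokenizer_config.json` for the actual one; `build_prompt` is a hypothetical helper, not part of any library):

```python
# Sketch: wrapping an assistant's system prompt and the user message
# in a Mistral-style instruction template. The exact format is an
# assumption -- verify it against your model's chat template.

def build_prompt(system: str, user: str) -> str:
    # System prompt goes inside the first [INST] block,
    # separated from the user turn by a blank line.
    return f"<s>[INST] {system}\n\n{user} [/INST]"

print(build_prompt("You are a pirate assistant.", "Say hello."))
```

If the local prompt omits the template tokens (or uses the wrong ones), the model will behave very differently from the HF-hosted assistant even though the weights are identical.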