huggingface / chat-ui

Open source codebase powering the HuggingChat app

Home Page: https://huggingface.co/chat



Feature request: local assistants

Zibri opened this issue · comments

commented

I experimented with a few assistants on HF.
The problem I am facing is that I don't know how to reproduce the behaviour I get on HF with a local copy of the same model.
I tried everything I could think of.
I think HF does some filtering or rephrasing or has an additional prompt before the assistant description.
Please help.
I am available for chat on discord https://discordapp.com/users/Zibri/

commented

Note: it would also be great to have the ability to export the full assistant definition as a llama.cpp "main" command (or a gpt4all prompt).
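To illustrate what such an export could look like, here is a minimal sketch of a helper that turns an assistant definition into a llama.cpp `main` invocation. The function name, the model path, and the assistant text are all hypothetical examples; the prompt is passed verbatim with `-p`, so any chat-template tokens the model expects would have to be included by the caller.

```python
import shlex

def assistant_to_llama_cpp_cmd(model_path, system_prompt, first_message=None,
                               temperature=0.8, ctx_size=4096):
    """Sketch: build a llama.cpp `main` command line from an assistant
    definition. All names and paths here are illustrative, not an
    actual chat-ui export format."""
    # Concatenate the assistant description with an optional first user turn.
    prompt = system_prompt if first_message is None else f"{system_prompt}\n\n{first_message}"
    args = [
        "./main",
        "-m", model_path,        # path to the GGUF model file
        "-c", str(ctx_size),     # context window size
        "--temp", str(temperature),
        "-p", prompt,            # raw prompt; template tokens are caller's job
    ]
    # shell-quote each argument so the command can be pasted into a terminal
    return " ".join(shlex.quote(a) for a in args)

cmd = assistant_to_llama_cpp_cmd("models/mistral-7b.Q4_K_M.gguf",
                                 "You are a helpful travel assistant.")
print(cmd)
```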

> I think HF does some filtering or rephrasing or has an additional prompt before the assistant description.

None at all! Make sure your prompt format is correct, that's usually the main culprit. Could you share your model config?
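For context on why the prompt format matters: each model is trained with a specific chat template, and the assistant description is simply placed in the system slot of that template. A minimal sketch, assuming a ChatML-style model (the exact template varies per model; with the `transformers` library, `tokenizer.apply_chat_template()` applies the correct one automatically). The assistant text and user message below are made-up examples.

```python
def build_chatml_prompt(system_prompt, user_message):
    """Sketch: format a system prompt plus one user turn in ChatML.
    Other models use different templates (e.g. [INST] ... [/INST]),
    so prefer tokenizer.apply_chat_template() when available."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"   # generation continues from here
    )

prompt = build_chatml_prompt(
    "You are a concise travel assistant.",  # the assistant description
    "Plan a weekend in Lisbon.",
)
print(prompt)
```

If the local run omits or mangles this template, the same model will behave very differently from the hosted assistant.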