huggingface / llm-vscode

LLM powered development for VSCode

Working with ollama or llama.cpp

region23 opened this issue

With the release of Code Llama, it became possible to run LLMs on a local machine using ollama or llama.cpp.
How do I configure the extension to work with a local Code Llama?

hello @region23

Yes, it is possible to use a local model. What you need to do is:

  1. Serve the model locally at some endpoint
  2. Change the extension settings accordingly

Change the endpoint to your local endpoint:

[screenshot: extension settings, endpoint field]

and make sure to update the prompt template:

[screenshot: extension settings, prompt template]
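
To make this concrete, here is a rough sketch of what the corresponding `settings.json` entries could look like for a locally served Code Llama model. The key names are assumptions based on the extension's README and may differ between llm-vscode versions, so check the version you have installed; the `<PRE>`/`<SUF>`/`<MID>` tokens are Code Llama's infilling format.

```jsonc
// settings.json -- illustrative sketch only; key names are assumptions
// and may differ in your version of llm-vscode
{
  // point the extension at the locally served model instead of the HF API
  "llm.modelIdOrEndpoint": "http://localhost:8080",

  // prompt template matching Code Llama's fill-in-the-middle format
  "llm.fillInTheMiddle.enabled": true,
  "llm.fillInTheMiddle.prefix": "<PRE> ",
  "llm.fillInTheMiddle.suffix": " <SUF>",
  "llm.fillInTheMiddle.middle": " <MID>",

  // token Code Llama emits at the end of an infill, stripped from completions
  "llm.tokensToClear": ["<EOT>"]
}
```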

HF Code Error: code - 400; msg - Bad Request

[screenshot: error notification in VS Code]

A curl request to the API works, though:

[screenshot: curl request and response]

For now ollama's API is not supported; it's on the todo list though!

cf. huggingface/llm-ls#17

Also created an issue for llama.cpp: huggingface/llm-ls#28
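
For context on why an adapter is needed (and why pointing the extension straight at one of these servers can come back with a 400 Bad Request): llm-ls sends requests in the Hugging Face Inference API / TGI shape, while llama.cpp's server and ollama each expect a differently shaped body. Roughly, with field names taken from each project's public docs at the time (they may have changed since):

```jsonc
{
  // Hugging Face Inference API / TGI shape -- what llm-ls sends today
  "tgi": {
    "inputs": "<PRE> def add(a, b): <SUF> <MID>",
    "parameters": { "max_new_tokens": 60 }
  },

  // llama.cpp server -- POST /completion
  "llama_cpp": {
    "prompt": "<PRE> def add(a, b): <SUF> <MID>",
    "n_predict": 60
  },

  // ollama -- POST /api/generate
  "ollama": {
    "model": "codellama",
    "prompt": "<PRE> def add(a, b): <SUF> <MID>"
  }
}
```

The adapters tracked in the issues above are meant to translate between these request (and response) shapes so the extension can talk to any of the backends.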

This issue is stale because it has been open for 30 days with no activity.

Is there a timeline for when "feat: Add adaptors for ollama and openai" (#117) might be merged?

I'm finishing the last touches on the llm-ls fixes and testing that everything works as expected for 0.5.0, then we should be good to go for a release. I'd say I'll find some time either this weekend or next week :)