brianpetro / obsidian-smart-connections

Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3

Home Page: https://smartconnections.app

FR: Support local server for embeddings

ArtificialAmateur opened this issue · comments

Jumping off of #302

Just as Smart Chat supports local server options, similar support could be added for embeddings.

The OpenAI-format API endpoint (which both LM Studio and Ollama support) is `/v1/embeddings`.
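As a rough sketch of what the plugin would send, here is a minimal OpenAI-format `/v1/embeddings` request built with the Python standard library. The base URL and model name are assumptions (LM Studio defaults to port 1234; check your local server's docs for its actual address and model identifiers):

```python
import json
from urllib import request

def build_embeddings_request(base_url, model, texts):
    """Build an OpenAI-format POST to <base_url>/v1/embeddings.

    base_url and model are placeholders -- e.g. LM Studio's default is
    http://localhost:1234; Ollama and other servers use their own
    addresses and model names.
    """
    url = base_url.rstrip("/") + "/v1/embeddings"
    payload = json.dumps({"model": model, "input": texts}).encode("utf-8")
    return request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it, a local server must be running:
# req = build_embeddings_request(
#     "http://localhost:1234", "nomic-embed-text", ["hello world"])
# with request.urlopen(req) as resp:
#     data = json.load(resp)
#     vector = data["data"][0]["embedding"]  # list of floats
```

The response follows the same OpenAI format: a `data` array whose entries each carry an `embedding` vector, so the plugin's existing OpenAI embedding path should need little more than a configurable base URL.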

I'd love this; the embedded WASM models don't seem to saturate the CPU/GPU, so embedding takes ages...

Makes sense. Thanks for the feature request 😊🌴