hdykokd / obsidian-copilot

A ChatGPT Copilot in Obsidian

πŸ” Copilot for Obsidian

Copilot for Obsidian is a free and open-source ChatGPT interface right inside Obsidian. It has a minimalistic design and is straightforward to use.

  • πŸ’¬ ChatGPT UI in Obsidian.
  • πŸ› οΈ Prompt AI with your writing using Copilot commands to get quick results.
  • πŸš€ Turbocharge your Second Brain with AI.
  • 🧠 Talk to your past notes for insights.

My goal is to make this AI assistant local-first and privacy-focused. It has a local vector store and can work with local models for chat and QA completely offline! More features are under construction. Stay tuned!

(Screenshot: Copilot chat UI in Obsidian)

πŸ› οΈ Features

  • Chat with ChatGPT right inside Obsidian in the Copilot Chat window.
  • No repetitive login. Use your own API key (stored locally).
  • No monthly fee. Pay only for what you use.
  • Model selection from OpenAI, Azure, and local models powered by LocalAI.
  • No need to buy ChatGPT Plus to use GPT-4.
  • No usage cap for GPT-4, unlike ChatGPT Plus.
  • One-click copying any message as markdown.
  • One-click saving the entire conversation as a note.
  • Use the active note as context, and start a discussion around it by switching to "QA: Active Note" in the Mode Selection menu.
    • This feature is powered by retrieval augmentation with a local vector store. No sending your data to a cloud-based vector search service!
  • Easy commands to simplify, emojify, summarize, translate, change tone, fix grammar, rewrite into a tweet/thread, count tokens and more.
  • Set your own parameters like LLM temperature, max tokens, and conversation context based on your needs (please be mindful of the API cost).
  • User custom prompts! You can add, apply, edit, and delete your own custom prompts, all persisted in your local Obsidian environment! Be creative with your own prompt templates; the sky is the limit!
  • Local model support for offline chat and QA using LocalAI. Talk to your notes without internet! (experimental feature)

🎬 Video Demos

πŸŽ‰ NEW in v2.4.0: Local Copilot! No internet required!! πŸŽ‰

Please make sure you go through this Step-by-step setup guide to set up Local Copilot on your device correctly!

I've received feedback that the "Use Local Copilot" toggle was unnecessary, so it is removed in v2.4.1. Now, make sure you have the --cors flag enabled in your LocalAI server (or CORS=true in your .env if you use Docker). Then simply fill in the OpenAI Proxy Base URL as "http://localhost:8080/v1" and restart the plugin to chat with your local models.
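If you'd like to verify that the LocalAI server is reachable before pointing Copilot at it, you can hit its OpenAI-compatible endpoints from your terminal. This is only a sketch: port 8080 matches the URL above, and "your-local-model" is a placeholder for whichever model you configured in LocalAI.

    # List the models your LocalAI server exposes
    curl http://localhost:8080/v1/models

    # Send a test chat completion ("your-local-model" is a placeholder)
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "your-local-model",
        "messages": [{"role": "user", "content": "Hello from Obsidian Copilot"}]
      }'

If both commands return JSON, the Proxy Base URL above should work.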

When you are done, clear the OpenAI Proxy Base URL to switch back to non-local models.

πŸ€— New to Copilot? Quick Guide for Beginners:

  • Chat with ChatGPT, copy messages to note, save entire conversation as a note
  • QA around your past notes
  • Fix grammar and spelling, Summarize, Simplify, Emojify, Remove URLs
  • Generate glossary, table of contents
  • Translate to a language of your choosing
  • Change tone: professional, casual, straightforward, confident, friendly
  • Make longer/shorter
  • Rewrite into a tweet/thread

πŸ’¬ User Custom Prompt: Create as Many Copilot Commands as You Like!

You can add, apply, edit and delete your own custom Copilot commands, all persisted in your local Obsidian environment! Check out this demo video below!

πŸ”§ Copilot Settings

The settings page lets you set your own temperature, max tokens, and conversation context based on your needs.

New models will be added as I get access.

You can also use your own system prompt and choose among different embedding providers such as OpenAI, CohereAI (their trial API is free and quite stable!), and Huggingface Inference API (free but sometimes times out).

βš™οΈ Installation

Copilot for Obsidian is now available as an Obsidian Community Plugin!

  • Open the Community Plugins settings page and click on the Browse button.
  • Search for "Copilot" in the search bar and find the plugin with this exact name.
  • Click on the Install button.
  • Once the installation is complete, enable the Copilot plugin by toggling on its switch in the Community Plugins settings page.

Now you can see the chat icon in your left ribbon; clicking on it will open the chat panel on the right! Don't forget to check out the Copilot commands available in the command palette!

⛓️ Manual Installation

  • Go to the latest release
  • Download main.js, manifest.json, and styles.css, and put them under .obsidian/plugins/obsidian-copilot/ in your vault (see the example after this list)
  • Open your Obsidian settings > Community plugins, and turn on Copilot.
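If you prefer the command line, the steps above can be scripted roughly as follows. This is only a sketch: the vault path is an example, and the release asset URLs assume the standard GitHub "latest release" download pattern for the upstream logancyang/obsidian-copilot repository.

    # Example vault path; change it to your own vault
    mkdir -p ~/MyVault/.obsidian/plugins/obsidian-copilot
    cd ~/MyVault/.obsidian/plugins/obsidian-copilot

    # Download the three release files (URL pattern assumed)
    for f in main.js manifest.json styles.css; do
      curl -LO "https://github.com/logancyang/obsidian-copilot/releases/latest/download/$f"
    done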

πŸ”” Note

  • The chat history is not saved by default. Please use "Save as Note" to save it. The note will have the title Chat-Year_Month_Day-Hour_Minute_Second; you can rename it as needed.
  • "New Chat" clears all previous chat history. Again, please use "Save as Note" if you would like to save the chat.
  • "Use Active Note as Context" creates a local vector index for the active note so that you can chat with super long note! To start the QA, please switch from "Conversation" to "QA: Active Note" in the Mode Selection dropdown.
  • You can set a very long context in the setting "Conversation turns in context" if needed.

πŸ“£ Again, please always be mindful of the API cost if you use GPT-4 with a long context!

πŸ€” FAQ (please read before submitting an issue)

It's not using my note as context
  • Please don't forget to switch to "QA: Active Note" in the Mode Selection dropdown in order to start the QA. Copilot does not use your note as context in "Conversation" mode.
  • In fact, you don't have to click the button on the right before starting the QA. Switching to QA mode in the dropdown is enough for Copilot to read the note as context. The button on the right is only for manually rebuilding the index for the active note, e.g. when you switch context to another note or suspect the current index is corrupted because you changed the embedding provider.
  • Reference issue: logancyang#51
Unresponsive QA when using Huggingface as the Embedding Provider
  • Huggingface Inference API is free to use, but it can frequently return errors such as 503 or 504 when their servers have issues. If this is a problem for you, please consider using OpenAI or CohereAI as the embedding provider. Just keep in mind that OpenAI costs more, especially with very long notes as context. (A way to test the Huggingface endpoint directly is sketched below.)
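  • If you want to check whether the outage is on Huggingface's side, you can call their Inference API directly from your terminal. This is only a rough sketch: the feature-extraction endpoint and the model name below are illustrative assumptions, not necessarily what Copilot uses internally.
    # A 503/504 here means the Huggingface Inference API itself is struggling
    curl https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-MiniLM-L6-v2 \
      -H "Authorization: Bearer YOUR_HF_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"inputs": "test sentence"}'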
"model_not_found"
  • You need to have access to models like GPT-4 or the Azure ones in order to use them. If you don't, sign up for the waitlist!
  • A common misunderstanding I see is that some people think they have access to the GPT-4 API because they have a ChatGPT Plus subscription. That is not true. You need access to the GPT-4 API to use the GPT-4 model in this plugin. Please check whether you can successfully use your model in the OpenAI playground first: https://platform.openai.com/playground?mode=chat&model=gpt-4. If not, you can apply for GPT-4 API access here: https://openai.com/waitlist/gpt-4-api. Once you have access to the API, you can use GPT-4 with this plugin without a ChatGPT Plus subscription! (A quick command-line check is also sketched below.)
  • Reference issue: logancyang#3 (comment)
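  • Besides the playground, a quick way to see which models your API key can use is to list them from the terminal (standard OpenAI endpoint; replace the key placeholder with your own):
    # Lists the models available to your key; look for "gpt-4" in the output
    curl https://api.openai.com/v1/models \
      -H "Authorization: Bearer YOUR_OPENAI_API_KEY"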
"insufficient_quota"
  • It might be because you haven't set up payment for your OpenAI account, or you exceeded your max monthly limit. OpenAI has a cap on how much you can use their API, usually $120 for individual users.
  • Reference issue: logancyang#11
"context_length_exceeded"
  • GPT-3.5 has a 4096-token context limit and GPT-4 has 8K (a 32K version will be available to the public soon, per OpenAI). So if you set a large token limit in your Copilot settings, you could get this error. Note that the prompts behind the scenes for Copilot commands also take up tokens, so please limit your message length and max tokens to avoid this error. (For QA with unlimited context, use the "QA: Active Note" chain in the dropdown! Requires Copilot v2.1.0.)
  • Reference issue: logancyang#1 (comment)
Azure issue
  • It's a bit tricky to get all the Azure credentials right on the first try. My suggestion is to test with curl in your terminal first, make sure you get a response back, and then set the correct parameters in Copilot settings. Example:
    curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=VERSION" \
      -H "Content-Type: application/json" \
      -H "api-key: YOUR_API_KEY" \
      -d "{
        \"prompt\": \"Once upon a time\",
        \"max_tokens\": 5
      }"
    
  • Reference issue: logancyang#98

When opening an issue, please include relevant console logs. You can go to Copilot's settings and turn on "Debug mode" at the bottom for more console messages!

πŸ“ Planned features (based on feedback)

  • Support embedded PDFs as context
  • Integration with ElevenLabs or Bark to let the AI speak like a human
  • Unlimited context for a collection of notes.
  • Retrieval augmented generation (RAG) with your vault. Explore, brainstorm and review ideas like never before!

πŸ™ Say Thank You

If you are enjoying Copilot, please support my work by buying me a coffee here: https://www.buymeacoffee.com/logancyang

Please also help spread the word by sharing the Copilot for Obsidian plugin on Twitter, Reddit, or any other social media platform you use.

You can find me on Twitter @logancyang.

πŸ“„ License

GNU Affero General Public License v3.0