william-vu / twinny

The ultimate straightforward, locally or API-hosted AI code completion plugin for Visual Studio Code—like GitHub Copilot but completely free!

Home Page: https://marketplace.visualstudio.com/items?itemName=rjmacarthy.twinny

twinny

Tired of the so-called "free" Copilot alternatives that are filled with paywalls and signups? Look no further, developer friend!

Twinny is your definitive, no-nonsense AI code completion plugin for Visual Studio Code and compatible editors such as VSCodium, designed to work with a range of local and API-hosted backends.

Like GitHub Copilot, but 100% free!

Install Twinny from the Visual Studio Code extension marketplace.

Main Features

Fill in the Middle Code Completion

Get AI-based suggestions in real time. Let Twinny autocomplete your code as you type.

Fill in the Middle Example
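Under the hood, fill-in-the-middle models receive the text before and after your cursor wrapped in special FIM tokens. As a rough illustration (not Twinny's exact request), this is what a CodeLlama-style FIM prompt looks like when sent by hand to a local Ollama server, where <PRE>, <SUF>, and <MID> are CodeLlama's FIM markers:

curl http://localhost:11434/api/generate -d '{
  "model": "codellama:7b-code",
  "prompt": "<PRE> def add(a, b):\n    return <SUF>\n\nprint(add(1, 2)) <MID>",
  "raw": true,
  "stream": false
}'

The model responds with the missing middle (here, something like "a + b"), which a FIM completion plugin splices in at the cursor.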

Chat with AI About Your Code

Discuss your code via the sidebar: get function explanations, generate tests, request refactoring, and more.

Additional Features

  • Operates online or offline
  • Highly customizable API endpoints for FIM and chat
  • Chat conversations are preserved
  • Conforms to the OpenAI API standard (see the example request after this list)
  • Supports single- or multiline fill-in-the-middle completions
  • Customizable prompt templates
  • Generate git commit messages from staged changes (CTRL+SHIFT+T CTRL+SHIFT+G)
  • Easy installation via the Visual Studio Code extensions marketplace
  • Customizable settings for API provider, model name, port number, and path
  • Compatible with Ollama, llama.cpp, oobabooga, and LM Studio APIs
  • Accepts code solutions directly in the editor
  • Creates new documents from code blocks
  • Copies generated code blocks
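Because the extension conforms to the OpenAI API standard, any backend exposing an OpenAI-compatible endpoint should work as a provider. As an illustrative request (the hostname, port, and model assume a default local Ollama install), this is the kind of endpoint the chat provider settings point at:

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "codellama:7b-instruct",
    "messages": [{"role": "user", "content": "Explain what a closure is."}]
  }'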

🚀 Getting Started

Setup with Ollama (Recommended)

  1. Install the Twinny extension for VS Code here or for VSCodium here.
  2. Install Ollama, the default backend, and make sure the server is running (a quick reachability check is sketched after these steps).
  3. Select your models from the Ollama library (e.g., codellama:7b-instruct for chat and codellama:7b-code for autocomplete):
ollama run codellama:7b-instruct
ollama run codellama:7b-code
  4. Open VS Code (a restart may be needed if it is already open) and press CTRL+SHIFT+T to open the side panel.

You should see the 🤖 icon indicating that Twinny is ready to use.

  5. See the Keyboard Shortcuts section to start using Twinny while coding 🎉
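If the 🤖 icon does not appear, check that the Ollama server is reachable and that your models were pulled. Both commands below assume Ollama's default port, 11434:

ollama list
curl http://localhost:11434/api/tags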

Setup with Other Providers (llama.cpp / LM Studio / Oobabooga / LiteLLM)

For setups with llama.cpp, LM Studio, Oobabooga, LiteLLM, or any other provider, see providers.md for details on provider configurations and functionality.

  1. Install the VS Code extension here.
  2. Obtain and run your chosen model locally using the provider's setup instructions (a llama.cpp example is sketched after this list).
  3. Restart VS Code if necessary and press CTRL+SHIFT+T to open the side panel.
  4. At the top of the extension, click the 🔌 (plug) icon to configure your FIM and chat endpoints in the providers tab.
  5. Use separate models for FIM and chat where possible, as they are optimized for different tasks.
  6. Update the provider settings for chat (provider, port, and hostname) so Twinny can connect to your chat model.
  7. After setup, the 🤖 icon should appear in the sidebar, indicating that Twinny is ready for use.
  8. Results may vary from provider to provider, especially if the same model is used interchangeably for chat and FIM.
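As an illustration for step 2, llama.cpp ships an HTTP server that can host a local GGUF model; the binary name and flags may differ between llama.cpp releases, and the model path here is only a placeholder:

./server -m ./models/codellama-7b-code.Q5_K_M.gguf --host 127.0.0.1 --port 8080 -c 2048

The host and port you pass here are the values to enter in Twinny's providers tab.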

Setup with Non-Local API Providers (e.g., OpenAI GPT-4, Anthropic Claude)

Twinny supports any OpenAI API-compliant provider.

  1. Use LiteLLM as a local proxy for the best compatibility (a minimal sketch follows below).
  2. If you run into problems, please open an issue on GitHub with details.
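A minimal sketch of that route (the model name, port, and API key are placeholders, and LiteLLM's CLI flags can change between versions):

export OPENAI_API_KEY=sk-...
litellm --model gpt-4 --port 8000

Once the proxy is running, point Twinny's chat provider at http://localhost:8000 like any other OpenAI-compatible endpoint.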

Model Support

Models for Chat:

  • For powerful machines: deepseek-coder:6.7b-base-q5_K_M or codellama:7b-instruct.
  • For less powerful setups, choose a smaller instruct model for quicker responses, albeit with less accuracy.

Models for FIM Completions:

  • High performance: deepseek-coder:base or codellama:7b-code.
  • Lower performance: deepseek-coder:1.3b-base-q4_1 for CPU-only setups (pull commands for these tags are sketched below).
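With Ollama as the backend, these tags can be pulled ahead of time so your first completion is not blocked by a download:

ollama pull codellama:7b-instruct
ollama pull codellama:7b-code
ollama pull deepseek-coder:6.7b-base-q5_K_M
ollama pull deepseek-coder:1.3b-base-q4_1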

Keyboard Shortcuts

Shortcut                       Description
ALT+\                          Trigger inline code completion
CTRL+SHIFT+/                   Stop the inline code generation
Tab                            Accept the generated inline code
CTRL+SHIFT+T                   Open the Twinny sidebar
CTRL+SHIFT+T CTRL+SHIFT+G      Generate commit messages from staged changes

Workspace Context

Enable useFileContext in the extension settings to improve completion quality by tracking sessions and file access patterns. It is disabled by default to avoid any performance impact.

Known Issues

Visit the GitHub issues page for known problems and troubleshooting.

Contributing

Interested in contributing? Reach out on Twitter, describe your changes in an issue, and submit a PR when ready. Twinny is open-source under the MIT license. See the LICENSE for more details.

Disclaimer

Twinny is actively developed and provided "as is". Functionality may vary between updates.

Star History

Star History Chart
