
llama-copilot.nvim

llama-copilot is a Neovim plugin that integrates with ollama's AI models for code completion.

Installation & setup

Install it using any plugin manager; it requires nvim-lua/plenary.nvim.

With packer

use {
  "Faywyn/llama-copilot.nvim",
  requires = "nvim-lua/plenary.nvim"
}
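
If you use lazy.nvim instead of packer, an equivalent plugin spec might look like the sketch below (the README itself only documents packer, so treat this as an assumption):

-- Hypothetical lazy.nvim spec; plenary.nvim becomes a `dependencies` entry
-- rather than packer's `requires`.
{
  "Faywyn/llama-copilot.nvim",
  dependencies = { "nvim-lua/plenary.nvim" },
}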

Calling the setup function is not required; it is only necessary if you want to use a different model, host, or port.

-- Default config
require('llama-copilot').setup({
  host = "localhost",
  port = "11434",
  model = "codellama:7b-code",
  max_completion_size = 15, -- use -1 for limitless
  debug = false
})
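
For example, to point the plugin at an ollama server running on another machine and use a larger model, you could override only the relevant options. The hostname and model name here are illustrative, not defaults:

-- Example custom config (hostname and model are illustrative)
require('llama-copilot').setup({
  host = "192.168.1.50",        -- machine running the ollama server
  port = "11434",               -- ollama's default port
  model = "codellama:34b-code", -- any model pulled on that server
  max_completion_size = -1,     -- -1 removes the completion size limit
})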

Requirement

Note

The plugin was initially built for codellama:7b-code (and model sizes up to 70b); it hasn't been tested with other LLM models. It also needs a reachable ollama server with the chosen model pulled locally (e.g. via ollama pull codellama:7b-code).

Usage

llama-copilot provides the user commands :LlamaCopilotComplet and :LlamaCopilotAccept, which trigger code generation (based on the current context) and accept the generated code. Here's how to use them (a keymap sketch follows the steps):

  1. Position your cursor where you want to generate code.
  2. Type :LlamaCopilotComplet and press Enter.
  3. Wait for the code to generate
  4. Type :LlamaCopilotAccept to place the completion on your file or :q to quit the open window
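
If you trigger these commands often, you can bind them to keymaps. The mappings below are suggestions, not defaults shipped with the plugin:

-- Suggested (not built-in) mappings for the two user commands
vim.keymap.set('n', '<leader>lc', '<cmd>LlamaCopilotComplet<CR>',
  { desc = 'llama-copilot: complete at cursor' })
vim.keymap.set('n', '<leader>la', '<cmd>LlamaCopilotAccept<CR>',
  { desc = 'llama-copilot: accept completion' })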

Example

Demo video: llama-copilot.nvim.mp4 (speed: x6 | LLM: codellama:12b-code)


License: MIT

