Ericsongyl / tabby

Self-hosted AI coding assistant

Home Page: https://tabbyml.github.io/tabby


🐾 Tabby


[Architecture diagram]

Self-hosted AI coding assistant. An open-source, on-premises alternative to GitHub Copilot.

Warning: Tabby is still in the alpha phase.

Features

  • Self-contained, with no need for a DBMS or cloud service.
  • Web UI for visualizing and configuring models and MLOps.
  • OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE).
  • Consumer-level GPU support (FP16 weight loading with various optimizations).

Demo

Open in Spaces


Getting Started: Server

Docker

NOTE: Tabby requires a Pascal or newer NVIDIA GPU.

Before running Tabby, ensure the NVIDIA Container Toolkit is installed. We suggest using an NVIDIA driver that is compatible with CUDA 11.8 or higher.

# Create the data directory and grant ownership to UID 1000 (Tabby runs as UID 1000 inside the container)
mkdir -p data/hf_cache && chown -R 1000 data

docker run \
  --gpus all \
  -it --rm \
  -v "/$(pwd)/data:/data" \
  -v "/$(pwd)/data/hf_cache:/home/app/.cache/huggingface" \
  -p 5000:5000 \
  -e MODEL_NAME=TabbyML/J-350M \
  -e MODEL_BACKEND=triton \
  --name=tabby \
  tabbyml/tabby

You can then query the server using the /v1/completions endpoint:

curl -X POST http://localhost:5000/v1/completions -H 'Content-Type: application/json' --data '{
    "prompt": "def binarySearch(arr, left, right, x):\n    mid = (left +"
}'
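
For programmatic access, the same request can be issued from Python. Below is a minimal sketch using the requests library, assuming the server started above is listening on localhost:5000; consult the OpenAPI documentation for the exact response schema.

import requests

# Assumption: the Tabby server from the Docker example above, listening on localhost:5000.
SERVER_URL = "http://localhost:5000"

payload = {
    "prompt": "def binarySearch(arr, left, right, x):\n    mid = (left +"
}

# POST the prompt to the /v1/completions endpoint.
response = requests.post(f"{SERVER_URL}/v1/completions", json=payload, timeout=30)
response.raise_for_status()

# Print the raw JSON response; see the OpenAPI docs for the exact schema.
print(response.json())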

We also provide an interactive playground in the admin panel at localhost:5000/_admin

SkyPilot

See deployment/skypilot/README.md

Getting Started: Client

We offer multiple ways to connect to the Tabby server, including the OpenAPI interface and editor extensions.

API

Tabby opens a FastAPI server at localhost:5000, which includes OpenAPI documentation of the HTTP API. The same API documentation is also hosted at https://tabbyml.github.io/tabby
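
As a quick way to explore the available endpoints, you can fetch and inspect the OpenAPI schema that FastAPI serves. A small sketch, assuming the default /openapi.json path and a server on localhost:5000:

import requests

# FastAPI exposes its OpenAPI schema at /openapi.json by default.
schema = requests.get("http://localhost:5000/openapi.json", timeout=10).json()

# List the HTTP paths and methods exposed by the Tabby server.
for path, methods in schema.get("paths", {}).items():
    print(path, sorted(methods.keys()))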

Editor Extensions

Development

Go to the development directory and run:

make dev

or

make dev-triton # Turn on the Triton backend (for CUDA environment developers)

About

License: Apache License 2.0


Languages

Python 49.9%, TypeScript 16.7%, Vim Script 14.4%, JavaScript 10.2%, Shell 4.3%, Dockerfile 2.0%, HTML 1.3%, Makefile 1.2%