oobabooga

User data from GitHub: https://github.com/oobabooga

Home Page: https://patreon.com/oobabooga

GitHub: @oobabooga

oobabooga's repositories

text-generation-webui

The definitive Web UI for local AI, with powerful features and easy setup.

Language: Python | License: AGPL-3.0 | Stargazers: 45369 | Issues: 349 | Issues: 4055

one-click-installers

Simplified installers for oobabooga/text-generation-webui.

Language: Python | License: AGPL-3.0 | Stargazers: 561 | Issues: 21 | Issues: 70

flash-attention

Fast and memory-efficient exact attention - Windows wheels

Language: Python | License: BSD-3-Clause | Stargazers: 36 | Issues: 1 | Issues: 0

llama-cpp-python-cuBLAS-wheels

Wheels for llama-cpp-python compiled with cuBLAS support

Language: HTML | License: Unlicense | Stargazers: 27 | Issues: 2 | Issues: 0

llm-tools

Various scripts for working with local LLMs

Language: Python | Stargazers: 15 | Issues: 0 | Issues: 0

SillyTavern

LLM Frontend for Power Users.

Language: JavaScript | License: AGPL-3.0 | Stargazers: 11 | Issues: 0 | Issues: 0

transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

Language: Python | License: Apache-2.0 | Stargazers: 10 | Issues: 0 | Issues: 0
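
As a quick, non-authoritative illustration of what this library is for (not part of the listing data; the model name "gpt2" and the prompt below are placeholder assumptions), basic text generation with the transformers pipeline API looks roughly like this:

    from transformers import pipeline

    # Load a small text-generation pipeline; "gpt2" is just an example model.
    generator = pipeline("text-generation", model="gpt2")

    # Generate a short continuation of a placeholder prompt.
    result = generator("Hello, I'm a language model,", max_new_tokens=20)
    print(result[0]["generated_text"])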

exllamav2

A fast inference library for running LLMs locally on modern consumer-class GPUs

Language: Python | License: MIT | Stargazers: 8 | Issues: 1 | Issues: 0

llama-cpp-python-basic

Python bindings for llama.cpp

Language: Python | License: MIT | Stargazers: 7 | Issues: 1 | Issues: 0
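
As a rough sketch of how the upstream llama-cpp-python bindings are typically used (the model path, prompt, and parameters below are placeholder assumptions, not taken from this repository):

    from llama_cpp import Llama

    # Load a local GGUF model; the path is a placeholder.
    llm = Llama(model_path="./models/example.gguf", n_ctx=2048)

    # Run a short completion with a stop sequence.
    output = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
    print(output["choices"][0]["text"])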

chatbot-ui

An open source ChatGPT UI.

Language: TypeScript | License: MIT | Stargazers: 6 | Issues: 1 | Issues: 0

bitsandbytes-windows-webui

Windows compile of bitsandbytes for use in text-generation-webui.

Language: HTML | License: MIT | Stargazers: 5 | Issues: 2 | Issues: 0

gradio

Create UIs for your machine learning model in Python in 3 minutes

Language: Python | License: Apache-2.0 | Stargazers: 5 | Issues: 1 | Issues: 0
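
As a minimal sketch of the "UI in a few minutes" idea (the function and labels are illustrative assumptions, not from this listing), a basic Gradio interface looks roughly like this:

    import gradio as gr

    # A trivial function standing in for a real model.
    def greet(name):
        return f"Hello, {name}!"

    # Wrap the function in a web UI with a text input and a text output.
    demo = gr.Interface(fn=greet, inputs="text", outputs="text")
    demo.launch()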

whisper

Robust Speech Recognition via Large-Scale Weak Supervision

Language: Python | License: MIT | Stargazers: 5 | Issues: 0 | Issues: 0
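
As a brief, hedged example of the upstream library's basic usage (the model size and audio filename are placeholder assumptions):

    import whisper

    # Load one of the pretrained model sizes; "base" is just an example.
    model = whisper.load_model("base")

    # Transcribe a local audio file (placeholder path) and print the text.
    result = model.transcribe("audio.mp3")
    print(result["text"])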

AutoGPTQ

An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.

Language: Python | License: MIT | Stargazers: 4 | Issues: 1 | Issues: 0

llama-cpp-binaries

llama.cpp server in a Python wheel.

Language: Python | License: AGPL-3.0 | Stargazers: 4 | Issues: 0 | Issues: 0

AutoAWQ

AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference.

Language: Python | License: MIT | Stargazers: 3 | Issues: 0 | Issues: 0

GPTQ-for-LLaMa-CUDA

A combination of oobabooga's fork and the main CUDA branch of GPTQ-for-LLaMa in a package format.

Language: Python | License: Apache-2.0 | Stargazers: 3 | Issues: 0 | Issues: 0

exllamav3

An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs

Language: Python | License: MIT | Stargazers: 2 | Issues: 0 | Issues: 0

llama.cpp

Port of Facebook's LLaMA model in C/C++

Language: C++ | License: MIT | Stargazers: 2 | Issues: 1 | Issues: 0

bitsandbytes

8-bit CUDA functions for PyTorch

Language: Python | License: MIT | Stargazers: 1 | Issues: 0 | Issues: 0

BlockMerge_Gradient

Merge Transformers language models using gradient parameters.

Language: Python | License: Apache-2.0 | Stargazers: 1 | Issues: 1 | Issues: 0