Eric Kryski's repositories
AgentGPT
🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
Alpaca-CoT
We unified the interfaces of instruction-tuning data (e.g., CoT data, continually expanded), multiple LLMs, and parameter-efficient methods (e.g., LoRA, p-tuning) at three levels, building an LLM instruction-fine-tuning research platform that is easy for researchers to get started with. Meanwhile, the tabular_llm branch builds an LLM for tabular intelligence tasks.
alpaca-lora
Instruct-tune LLaMA on consumer hardware
Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
ChatLLM-Web
🗣️ Chat with LLM like Vicuna totally in your browser with WebGPU, safely, privately, and with no server.
dalai
The simplest way to run LLaMA on your local machine
FastChat
The release repo for "Vicuna: An Open Chatbot Impressing GPT-4"
ggml
Tensor library for machine learning
gpt-llama.cpp
A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI.
gpt4all
gpt4all: a chatbot trained on a massive collection of clean assistant data including code, stories and dialogue
gpt4all-chat
gpt4all-j chat
gpt4all-ui
gpt4all chatbot ui
langchain
⚡ Building applications with LLMs through composability ⚡
llama-hub
A library of data loaders for LLMs made by the community -- to be used with GPT Index and/or LangChain
llama.cpp
Port of Facebook's LLaMA model in C/C++
LocalAI
:robot: Self-hosted, community-driven, local OpenAI-compatible API. Can be used as a drop-in replacement for OpenAI, running on CPU with consumer-grade hardware. API for ggml compatible models, for instance: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, vicuna, koala, gpt4all-j, cerebras
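Because LocalAI (like gpt-llama.cpp above) exposes an OpenAI-compatible API, existing OpenAI client code can usually be repointed at it by changing only the base URL. A minimal sketch of building such a request with the standard library; the model name is a hypothetical local model, and the port assumes LocalAI's default of 8080:

```python
import json
import urllib.request

# LocalAI serves the same /v1/chat/completions route as the OpenAI API,
# so a standard chat-completion payload works unchanged.
BASE_URL = "http://localhost:8080"  # assumed default LocalAI port

payload = {
    "model": "ggml-gpt4all-j",  # hypothetical local model name
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
}

request = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request) would return the completion JSON;
# it is left out here because it requires a running LocalAI server.
```

The same payload shape works against any of the listed OpenAI-compatible backends, which is what makes these projects drop-in replacements.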
mlc-llm
Enable everyone to develop, optimize, and deploy AI models natively on their own devices.
obsidian-smart-connections
Chat with your notes in Obsidian! Plus, see what's most relevant in real-time! Interact and stay organized. Powered by OpenAI ChatGPT, GPT-4 & Embeddings.
privateGPT
Interact privately with your documents using the power of GPT, 100% privately, no data leaks
PyAIPersonality
A library for defining AI personalities for AI-based models. It defines a file format, assets, and personalized scripts.
qlora
QLoRA: Efficient Finetuning of Quantized LLMs
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
rwkv.cpp
INT4 and FP16 inference on CPU for RWKV language model
skypilot
SkyPilot is a framework for easily running machine learning workloads on any cloud through a unified interface.
text-generation-webui
A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
web-llm
Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.
window.ai
Use your own AI models on the web