fullstack's repositories
localLLM_guidance
Local LLM Agent with Guidance
superagent
🥷 SuperAgent - Deploy LLM Agents to production
agent-mimir
A command-line chat client and "agent" manager for LLMs like ChatGPT that provides the models with access to tooling and a framework with which to accomplish multi-step tasks.
ai-chatbot
A full-featured, hackable Next.js AI chatbot built by Vercel
Awesome-Graph-LLM
A collection of AWESOME things about Graph-Related LLMs.
evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
fastapi
FastAPI framework, high performance, easy to learn, fast to code, ready for production
handlebars-guidance
A guidance language for controlling large language models.
CRDT-Redis
CRDTs implemented in Redis
dify
One API for plugins and datasets, one interface for prompt engineering and visual operation, all for creating powerful AI applications.
gpt4all
gpt4all: a chatbot trained on a massive collection of clean assistant data including code, stories and dialogue
iptables-docker
A Bash solution for the Docker and iptables conflict
litellm-no-tele
Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
llm-numbers
Numbers every LLM developer should know
openai-cookbook
Examples and guides for using the OpenAI API
OpenVoice
Instant voice cloning by MyShell.
OregonORSLegislativeCorpus
August 2024 data dump of ORS Statutes https://oregon.public.law/statutes
remix-auth-request
Just pass me the request...
runpod-ollama-serverless
This repo contains all the helper code required to run your Ollama service on a RunPod GPU as a serverless service.
transformers-openai-api
An OpenAI Completions API-compatible server for NLP transformers models
turbo
Incremental bundler and build system optimized for JavaScript and TypeScript, written in Rust – including Turbopack and Turborepo.
ufw-docker
To fix the Docker and UFW security flaw without disabling iptables
wonda
AutoGPT prompt template for file-based instructions and advice
worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.