Shaun Prince's repositories
ModelReady
Collection of tools for creating and running llama.cpp compatible LLMs
Auto-GPT-Plugin-AgentHub
A showcase of diverse AI agents, fostering innovation and collaboration across domains.
gpt-for-business
A ChatGPT equivalent for businesses that require information privacy
Agent-LLM
An Artificial Intelligence Automation Platform. Manages AI instructions from various providers, features adaptive memory, and offers a versatile plugin system with many commands, including web browsing. Supports many AI providers and models, with more added every day.
agent-programming-guides
A guide for programming in style.
alpaca-electron
An even simpler way to run Alpaca
AutoAWQ
AutoAWQ implements the AWQ algorithm for 4-bit quantization, with a 2x speedup during inference.
awesome-readme
A curated list of awesome READMEs
LLM-DAN-Prompts
ChatGPT DAN jailbreak prompts
srt-lab-k8s
SolidRusT Networks NoLAG Dev Lab
biased-decision-maker
An application that helps users make biased decisions based on a list of options and biases.
chatgpt-api-whisper-api-voice-assistant
ChatGPT API and Whisper API tutorial: a voice conversation with a therapist
dalai-ng
The simplest way to run LLaMA on your local machine
embedchain-local
Framework to easily create local LLM powered bots over any dataset.
fastest-apt-mirror
Script to determine the fastest APT package repository mirror
gpt4free
Decentralising the AI industry: just some language model APIs...
GPTQ-for-LLM
4-bit quantization of LLMs using GPTQ
gptq-pipeline
An easy-to-use model quantization package with user-friendly APIs, based on the GPTQ algorithm.
llama-ggml-api-python
Python bindings for llama.cpp
privateGPT
Interact with your documents using the power of GPT, 100% privately, with no data leaks
stable-diffusion-webui
Stable Diffusion web UI
suparious
Suparious
suparious.com
Public profile for Suparious
suparious.github.io
Personal site for Shaun "Suparious" Prince
torrent-ai-assistant
AI assistant that helps download and organize torrents
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs