John K.Happy's repositories
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
aituber-kit
AITuber Kit
alpaca-lora
Instruct-tune LLaMA on consumer hardware
alpaca.cpp
Locally run an Instruction-Tuned Chat-Style LLM
dalai
The simplest way to run LLaMA on your local machine
DeepLearning
Deep Learning (Python, C, C++, Java, Scala, Go)
dvc
🦉 ML Experiments and Data Management with Git
FlexGen
Running large language models on a single GPU for throughput-oriented scenarios.
GPTQ-for-LLaMa
4-bit quantization of LLaMA using GPTQ
Keras-GAN
Keras implementations of Generative Adversarial Networks.
lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports quantization, LoRA fine-tuning, pre-training. Apache 2.0-licensed.
llama
Inference code for LLaMA models
ollama
Get up and running with Llama 3, Mistral, Gemma 2, and other large language models.
open-webui
User-friendly WebUI for LLMs (Formerly Ollama WebUI)
reinforcement-learning
Implementation of Reinforcement Learning Algorithms. Python, OpenAI Gym, Tensorflow. Exercises and Solutions to accompany Sutton's Book and David Silver's course.
ros2
The Robot Operating System (ROS) is a meta operating system for robots.
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
stable-diffusion
A latent text-to-image diffusion model
tau
Open source distributed Platform as a Service (PaaS). A self-hosted Vercel / Netlify / Cloudflare alternative.
the-algorithm
Source code for Twitter's Recommendation Algorithm
vscode
Visual Studio Code
xmrig-6.21.1
Remote repository of local settings