ntkrnl's starred repositories
RFdiffusion
Code for running RFdiffusion
ChatGLM-Tuning
A fine-tuning scheme based on ChatGLM-6B + LoRA
node-chatgpt-api
A client implementation for ChatGPT and Bing AI. Available as a Node.js module, REST API server, and CLI app.
Alpaca-CoT
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We built this platform to make fine-tuning large models easy for researchers to pick up, and we welcome open-source enthusiasts to initiate any meaningful PR on this repo and integrate as many LLM-related technologies as possible.
chatglm_finetuning
Fine-tuning for ChatGLM-6B and Alpaca
ColossalAI
Making large AI models cheaper, faster, and more accessible
LLaMA-Adapter
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
browser-agent
A browser AI agent using GPT-4
self-instruct
Aligning pretrained language models with instruction data generated by the models themselves.
autoprompt
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
bigcode-evaluation-harness
A framework for the evaluation of autoregressive code generation language models.