newbeeeee's starred repositories
Auto-GPT-ZH
Chinese version of Auto-GPT and its enthusiast community, kept in sync with the original project. An AI entrepreneurship and self-media community: work, learn, create, and monetize with AI.
ChatGLM-Efficient-Tuning
Fine-tuning ChatGLM-6B with PEFT
llama-cpp-python
Python bindings for llama.cpp
ChatGPT-Next-Web
A cross-platform ChatGPT/Gemini UI (Web / PWA / Linux / Win / MacOS). One-click deployment of your own cross-platform ChatGPT/Gemini app.
RecSystem-Pytorch
Implementations of recommendation-system papers, including sequence recommendation, multi-task learning, meta-learning, etc.
NN-CUDA-Example
Several simple examples for popular neural network toolkits calling custom CUDA operators.
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
alpaca-lora
Instruct-tune LLaMA on consumer hardware
ChatGLM-Tuning
A fine-tuning scheme based on ChatGLM-6B + LoRA
segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Chinese-Vicuna
Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model. A low-resource Chinese LLaMA + LoRA recipe, with a structure modeled on Alpaca.
LLaMA-Adapter
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
the-algorithm
Source code for Twitter's Recommendation Algorithm
dl-engineer-guidebook
A survival guide for deep learning engineers
ChatGLM-finetune-LoRA
Code for fine-tuning ChatGLM-6B using low-rank adaptation (LoRA)
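Several of the starred repositories above (alpaca-lora, ChatGLM-Tuning, ChatGLM-finetune-LoRA, LLaMA-Adapter) revolve around low-rank adaptation. A minimal NumPy sketch of the core LoRA idea, with illustrative shapes and hyperparameters (the dimensions, rank, and scaling below are assumptions, not taken from any of these repos):

```python
import numpy as np

d, r = 1024, 8  # hidden size and LoRA rank (illustrative values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # zero-initialized, so the update starts at 0
alpha = 16                               # LoRA scaling hyperparameter

def forward(x):
    # Frozen path plus the scaled low-rank update (alpha / r) * B @ A
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((2, d))
full = d * d        # parameters a full fine-tune would train
lora = 2 * d * r    # parameters LoRA trains instead (A and B)
print(f"trainable params: {lora} vs {full} ({lora / full:.1%})")
```

Because B starts at zero, the adapted model initially reproduces the frozen model exactly, and only the small A/B factors (here under 2% of the full weight count) receive gradients during fine-tuning.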