Ma-Dan's repositories
Llama2-CoreML
Llama2 for iOS implemented using CoreML.
alpaca-lora
Instruct-tune LLaMA on consumer hardware
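alpaca-lora instruction-tunes via LoRA (through the PEFT library): the frozen base weights get a trainable low-rank update. A toy sketch of that idea in plain Python, with illustrative names and shapes (not the repo's actual code):

```python
def matmul(X, Y):
    # naive matrix multiply over nested lists
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_forward(x, W, A, B, alpha=1.0):
    # Toy LoRA forward pass: y = x @ W + alpha * (x @ A @ B).
    # W (d_in x d_out) stays frozen; only the small low-rank factors
    # A (d_in x r) and B (r x d_out), with r << d_in, are trained --
    # which is what makes tuning feasible on consumer hardware.
    base = matmul(x, W)
    delta = matmul(matmul(x, A), B)
    return [[b + alpha * d for b, d in zip(brow, drow)]
            for brow, drow in zip(base, delta)]
```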
Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
ChatGLM-6B
ChatGLM-6B: An Open Bilingual Dialogue Language Model
ChatGLM-Tuning
An affordable ChatGPT-style implementation, based on ChatGLM-6B + LoRA
ChatRWKV
ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and is open source.
Chinese-alpaca-lora
Luotuo (骆驼): a Chinese instruction-finetuned LLaMA. Developed by 陈启源 @ Central China Normal University & 李鲁鲁 @ SenseTime & 冷子昂 @ SenseTime
Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models, with local CPU/GPU deployment
ControlVideo
Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation"
DB-GPT-Hub
A repository of models, datasets, and fine-tuning techniques for DB-GPT, aimed at enhancing model performance, especially on Text-to-SQL.
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
DeepSpeedExamples
Example models using DeepSpeed
diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
InstructGLM
Instruction learning and instruction data for ChatGLM-6B
kaldifeat
Kaldi-compatible online & offline feature extraction with PyTorch, supporting CUDA, batch processing, chunk processing, and autograd; provides C++ & Python APIs
llama
Inference code for LLaMA models
llama-recipes
Examples and recipes for the Llama 2 model
LLaSM
The first open-source, commercially usable dialogue model supporting Chinese-English bilingual speech-and-text multimodal conversation. Convenient voice input greatly improves the user experience of text-input LLMs, while avoiding the cumbersome pipeline of ASR-based solutions and the errors they can introduce.
MatmulTutorial
An easy-to-understand TensorOp matmul tutorial
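The core idea such matmul tutorials build up to is blocked (tiled) computation: accumulating C one tile at a time to reuse data. A minimal plain-Python sketch of that access pattern (illustrative only; real TensorOp code runs on GPU warps, not Python loops):

```python
def tiled_matmul(A, B, tile=2):
    # C = A @ B computed tile by tile: each (i0, j0) tile of C is
    # accumulated from matching tiles of A and B.
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        acc = 0.0
                        for p in range(p0, min(p0 + tile, k)):
                            acc += A[i][p] * B[p][j]
                        C[i][j] += acc
    return C
```

Any tile size gives the same result; tiling only changes the memory-access order, which is what hardware-oriented tutorials exploit.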
MotionPlanning
Motion planning algorithms commonly used on autonomous vehicles. (path planning + path tracking)
multi_agent_path_planning
Python implementation of a bunch of multi-robot path-planning algorithms.
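The low-level primitive most of these planners (single- and multi-robot alike) are built on is grid A* search. A self-contained sketch, not taken from either repository:

```python
import heapq

def astar(grid, start, goal):
    # A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    # Returns the list of cells from start to goal, or None.
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start, None)]  # (f, g, node, parent)
    came, g = {}, {start: 0}
    while open_heap:
        _, cost, node, parent = heapq.heappop(open_heap)
        if node in came:
            continue  # already expanded with a better cost
        came[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = cost + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None
```

Multi-robot algorithms such as conflict-based search run a low-level search like this per robot, then resolve inter-robot conflicts on top.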
nebullvm
Plug-and-play modules to optimize the performance of your AI systems 🚀
Qbot
[🔥 updating...] Qbot is an AI-oriented quantitative investment platform and automated trading bot that aims to realize the potential of AI technologies in quantitative investment. 📃 Online docs: https://ufund-me.github.io/Qbot ✨ News: qbot-mini: https://github.com/Charmve/iQuant
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
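The trick behind "trains like a GPT, infers like an RNN" is that RWKV's attention-like mixing can be computed with constant-size recurrent state. A heavily simplified toy illustration of that idea (real RWKV uses per-channel learned decay, a bonus term for the current token, and numerical-stability tricks; none of that is shown here):

```python
import math

def rwkv_like_mix(keys, values, decay=0.5):
    # Toy "WKV"-style mixing: each output is a decay-weighted softmax
    # average over all past (key, value) pairs, maintained with O(1)
    # recurrent state (num, den) -- hence RNN-style inference cost.
    num = den = 0.0
    outputs = []
    for k, v in zip(keys, values):
        w = math.exp(k)
        num = num * math.exp(-decay) + w * v
        den = den * math.exp(-decay) + w
        outputs.append(num / den)
    return outputs
```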
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
zero_nlp
Chinese NLP applications (data, models, training, inference)
Zhongjing
A Chinese medical ChatGPT based on LLaMA, trained on a large-scale pretraining corpus and a multi-turn dialogue dataset.