DawnShine's starred repositories
FlagEmbedding
Retrieval and Retrieval-augmented LLMs
promptbench
A unified evaluation framework for large language models
MedicalGPT
MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT Training Pipeline. Trains medical LLMs, implementing continued pretraining (PT), supervised fine-tuning (SFT), RLHF, DPO, and ORPO.
VisualGLM-6B
Chinese and English multimodal conversational language model
promptflow
Build high-quality LLM apps - from prototyping and testing to production deployment and monitoring.
LLaMA-Factory
A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
ColossalAI
Making large AI models cheaper, faster, and more accessible
eat_pytorch_in_20_days
PyTorch🍊🍉 is delicious, just eat it! 😋😋
Megatron-LM
Ongoing research training transformer models at scale
flash-attention
Fast and memory-efficient exact attention
PaLM-rlhf-pytorch
Implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the PaLM architecture. Basically ChatGPT, but with PaLM
alpaca-lora
Instruct-tune LLaMA on consumer hardware
Alpaca-CoT
We unified the interfaces for instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to open any meaningful PR on this repo and integrate as many LLM-related technologies as possible.