Piji Li's repositories
LLaMA-Efficient-Tuning
Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)
baichuan-7B
A large-scale 7B-parameter pretrained language model developed by Baichuan
FindTheChatGPTer
ChatGPT's explosive popularity marked a key step toward AGI. This project collects open-source alternatives to ChatGPT, including large text models and multimodal models, for easy reference.
ChatGLM-Efficient-Tuning
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
methane-gapfill-ml
Python codebase for gap-filling eddy covariance methane fluxes at FLUXNET-CH4 wetlands with machine learning.
agents
An Open-source Framework for Autonomous Language Agents
Chat-Haruhi-Suzumiya
Chat Haruhi Suzumiya: a chatbot that imitates anime-character dialogue, developed by 李鲁鲁, 冷子昂, and other students.
ChatGLM2-6B
ChatGLM2-6B: An Open Bilingual Chat LLM | Open-source bilingual dialogue language model
chatglm_finetuning
Fine-tuning for ChatGLM-6B and Alpaca
dify
An Open-Source Assistants API and GPTs alternative. Dify.AI is an LLM application development platform. It integrates the concepts of Backend as a Service and LLMOps, covering the core tech stack required for building generative AI-native applications, including a built-in RAG engine.
finetune-gpt2xl
Guide: fine-tune GPT2-XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed
llama-recipes
Examples and recipes for the Llama 2 model
mistral-src
Reference implementation of Mistral AI 7B v0.1 model.
Neural-Brain-Decoding
A repository for collecting and investigating language-oriented neural decoding work
SwinLLama
SwinLLama, a model for medical report generation
TinyLlama
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
transpeeder
Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism
XAgent
An Autonomous LLM Agent for Complex Task Solving
zero_nlp
Chinese NLP applications (large models, data, models, training, inference)