ldwang's repositories
agentflow
Complex LLM Workflows from Simple JSON.
Anima
The first open-source QLoRA-based 33B Chinese large language model.
Aquila2
The official repo of Aquila2 series proposed by BAAI, including pretrained & chat large language models.
argilla
✨Argilla: the open-source data curation platform for LLMs
awesome-llm-human-preference-datasets
A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval.
block-recurrent-transformer-pytorch
Implementation of Block Recurrent Transformer - Pytorch
detect-pretrain-code
This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer.
DISC-FinLLM
DISC-FinLLM, a Chinese financial large language model (LLM) designed to provide users with professional, intelligent, and comprehensive financial consulting services in financial scenarios.
fairscale
PyTorch extensions for high performance and large scale training.
fastmoe
A fast MoE impl for PyTorch
ggml
Tensor library for machine learning
guidance
A guidance language for controlling large language models.
langchain
⚡ Building applications with LLMs through composability ⚡
langchain-learning
langchain study notes, covering langchain source-code walkthroughs, using Chinese models with langchain, langchain examples, and more.
llama_index
LlamaIndex (GPT Index) is a data framework for your LLM applications
LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
local-llm-function-calling
A tool for generating function arguments and choosing what function to call with local LLMs
LongChat
Official repository for LongChat and LongEval
mistral-src
Reference implementation of Mistral AI 7B v0.1 model.
MOSS-RLHF
MOSS-RLHF
nvtop
GPU process monitoring for AMD, Intel, and NVIDIA.
OpenLLaMA2
A Ray-based High-performance LLaMA2 RLHF framework
ray
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
s4
Structured state space sequence models
text-generation-inference
Large Language Model Text Generation Inference
training-code
The code we currently use to fine-tune models.
transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs