ldwang's repositories
ComfyUI
The most powerful and modular Stable Diffusion GUI, API, and backend with a graph/nodes interface.
deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
DiT
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"
Firefly
Firefly (流萤): a Chinese conversational large language model
FlagScale
FlagScale is a large-model toolkit built on open-source projects.
how-to-optim-algorithm-in-cuda
How to optimize common algorithms in CUDA.
Latte
Latte: Latent Diffusion Transformer for Video Generation.
LESS
Preprint: LESS: Selecting Influential Data for Targeted Instruction Tuning
LLaMA-Factory
Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
LLaMA-Pro
Progressive LLaMA with Block Expansion.
llm-foundry
LLM training code for MosaicML foundation models
LLMTest_NeedleInAHaystack
Simple retrieval from LLMs at various context lengths to measure accuracy
megalodon
Reference implementation of Megalodon 7B model
mergekit
Tools for merging pretrained large language models.
Open-Sora-Plan
This project aims to reproduce Sora (OpenAI's T2V model), but we have only limited resources. We deeply hope the whole open-source community can contribute to this project.
open_clip
An open-source implementation of CLIP.
opencompass
OpenCompass is an LLM evaluation platform supporting a wide range of models (LLaMA, LLaMA2, ChatGLM2, ChatGPT, Claude, etc.) over 50+ datasets.
QAnything
Question answering based on anything.
quiet-star
Code for Quiet-STaR
QuRating
Select LM Training Data Based on Qualitative Aspects of Text
qwen-vllm
A demo of Qwen (通义千问) inference and deployment with vLLM
Qwen1.5
Qwen1.5 is the improved version of Qwen, the large language model series developed by the Qwen team at Alibaba Cloud.
SiT
Official PyTorch Implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers"
stable-diffusion
A latent text-to-image diffusion model
Video-LLaVA
Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs