zhoudaquan's starred repositories

Megatron-LM

Ongoing research training transformer models at scale

Language: Python · License: NOASSERTION · Stargazers: 9712

ComfyUI_StoryDiffusion

Use StoryDiffusion inside ComfyUI

Language: Python · License: Apache-2.0 · Stargazers: 117

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Language: Python · License: Apache-2.0 · Stargazers: 25072
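
A minimal offline-inference sketch using vLLM's documented Python entry points (`LLM`, `SamplingParams`, `generate`); the model name and sampling settings below are illustrative placeholders, not part of this listing.

```python
# Minimal vLLM offline-inference sketch; "facebook/opt-125m" is a placeholder model.
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "In one sentence, explain continuous batching:",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")                # load the model once
outputs = llm.generate(prompts, sampling_params)    # batched generation

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```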

mem0

The memory layer for Personalized AI

Language: Python · License: Apache-2.0 · Stargazers: 19831

Paints-UNDO

Understand Human Behavior to Align True Needs

Language: Python · License: Apache-2.0 · Stargazers: 3185

Kolors

Kolors Team

Language: Python · License: Apache-2.0 · Stargazers: 3076

torchtitan

A native PyTorch Library for large model training

Language: Python · License: BSD-3-Clause · Stargazers: 1481

lloco

The official repo for "LLoCo: Learning Long Contexts Offline"

Language: Python · License: MIT · Stargazers: 104

ChronoMagic-Bench

ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation

Language: Python · License: Apache-2.0 · Stargazers: 164

cambrian

Cambrian-1 is a family of multimodal LLMs with a vision-centric design.

Language: Python · License: Apache-2.0 · Stargazers: 1660

hallo

Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation

Language: Python · License: MIT · Stargazers: 8286

LLM101n

LLM101n: Let's build a Storyteller

Stargazers: 27090

Awesome-LLM-Compression

Awesome LLM compression research papers and tools.

License: MIT · Stargazers: 979

gpt4all

GPT4All: Chat with Local LLMs on Any Device

Language: C++ · License: MIT · Stargazers: 68621
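
A short sketch of local chat via the gpt4all Python bindings; the exact `.gguf` model filename below is an assumed placeholder, so substitute any model the GPT4All app lists.

```python
# Local chat via the gpt4all Python bindings; the model filename is a placeholder.
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")   # downloaded on first use
with model.chat_session():                              # keeps multi-turn context
    reply = model.generate("Summarize what GPT4All does.", max_tokens=128)
    print(reply)
```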

champ

Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance

Language: Python · License: MIT · Stargazers: 3547

S-LoRA

S-LoRA: Serving Thousands of Concurrent LoRA Adapters

Language: Python · License: Apache-2.0 · Stargazers: 1666

MoRF

Receptive field as experts

License: MIT · Stargazers: 5

Unique3D

Official implementation of Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image

Language: Python · License: MIT · Stargazers: 2748

llama3-from-scratch

A llama3 implementation, one matrix multiplication at a time

Language: Jupyter Notebook · License: MIT · Stargazers: 12354
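
In the same spirit as that walkthrough, here is a toy causal self-attention head written as explicit matrix multiplications; this is a generic sketch, not code from the repo, and the shapes are arbitrary.

```python
# Toy single-head causal self-attention, one matmul at a time (illustrative only).
import torch

torch.manual_seed(0)
T, d = 4, 8                                    # sequence length, head dimension
x = torch.randn(T, d)                          # token embeddings
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))

Q, K, V = x @ W_q, x @ W_k, x @ W_v            # three projections, three matmuls
scores = (Q @ K.T) / d**0.5                    # scaled dot-product scores
mask = torch.triu(torch.ones(T, T), diagonal=1).bool()
scores = scores.masked_fill(mask, float("-inf"))   # causal mask
attn = torch.softmax(scores, dim=-1)
out = attn @ V                                 # weighted sum of values
print(out.shape)                               # torch.Size([4, 8])
```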

RectifiedFlow

Official Implementation of Rectified Flow (ICLR2023 Spotlight)

Language: Python · Stargazers: 764
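
A minimal toy sketch of the rectified-flow training objective: sample a point on the straight path x_t = (1 - t)·x0 + t·x1 and regress a velocity network toward the constant direction x1 - x0. The tiny MLP, batch sizes, and toy "data" are illustrative assumptions, not the repo's training code.

```python
# Toy rectified-flow objective on 2-D data (conceptual sketch, not the official code).
import torch
import torch.nn as nn

dim = 2
velocity = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(velocity.parameters(), lr=1e-3)

for step in range(100):
    x0 = torch.randn(256, dim)            # source samples (e.g. Gaussian noise)
    x1 = torch.randn(256, dim) + 3.0      # stand-in for data samples
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1            # point on the straight interpolation path
    target = x1 - x0                      # constant velocity along that path
    pred = velocity(torch.cat([xt, t], dim=-1))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```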

efficient-kan

An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN).

Language: Python · License: MIT · Stargazers: 3740
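
To illustrate the KAN idea itself (a learnable univariate function on every edge), here is a toy layer using a small RBF basis; this is a conceptual sketch only and does not reproduce efficient-kan's B-spline parameterization or API.

```python
# Toy "KAN-style" layer: a learnable 1-D function per edge via an RBF basis.
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(-2, 2, n_basis), requires_grad=False)
        # one coefficient vector per (input, output) edge
        self.coef = nn.Parameter(torch.randn(in_dim, out_dim, n_basis) * 0.1)

    def forward(self, x):                                       # x: (batch, in_dim)
        basis = torch.exp(-((x[..., None] - self.centers) ** 2))  # (batch, in, n_basis)
        # sum_i phi_ij(x_i) for each output j
        return torch.einsum("bik,iok->bo", basis, self.coef)

layer = ToyKANLayer(4, 3)
print(layer(torch.randn(5, 4)).shape)                           # torch.Size([5, 3])
```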

q-diffusion

[ICCV 2023] Q-Diffusion: Quantizing Diffusion Models.

Language: Python · License: MIT · Stargazers: 309
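
For context, a generic per-channel int8 weight quantization sketch shows what "quantizing" a model's weights means; Q-Diffusion's actual contribution is a timestep-aware post-training quantization scheme for diffusion models and is not reproduced here.

```python
# Generic per-channel int8 weight quantization (illustration, not Q-Diffusion's method).
import torch

def quantize_per_channel(w: torch.Tensor, n_bits: int = 8):
    qmax = 2 ** (n_bits - 1) - 1                      # 127 for int8
    scale = w.abs().amax(dim=1, keepdim=True) / qmax  # one scale per output channel
    q = torch.clamp((w / scale).round(), -qmax, qmax).to(torch.int8)
    return q, scale

w = torch.randn(16, 32)                               # a toy weight matrix
q, scale = quantize_per_channel(w)
w_hat = q.float() * scale                             # dequantized approximation
print((w - w_hat).abs().max())                        # small quantization error
```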

SpeeD

SpeeD: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training

Language: Python · License: Apache-2.0 · Stargazers: 136

StoryDiffusion

Create Magic Story!

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 5682

PLLaVA

Official repository for the paper PLLaVA

Language: Python · Stargazers: 529

corenet

CoreNet: A library for training deep neural networks

Language: Python · License: NOASSERTION · Stargazers: 6882

VAR

[GPT beats diffusionšŸ”„] [scaling laws in visual generationšŸ“ˆ] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-simple, user-friendly yet state-of-the-art* codebase for autoregressive image generation!

Language: Python · License: MIT · Stargazers: 3935

MLLM_Factory

A dead-simple, modularized multi-modal training and finetuning framework, compatible with any LLaVA/Flamingo/QwenVL/MiniGemini-series model.

Stargazers: 17

motion-diffusion-model

The official PyTorch implementation of the paper "Human Motion Diffusion Model"

Language: Python · License: MIT · Stargazers: 3026

co-tracker

CoTracker is a model for tracking any point (pixel) in a video.

Language: Jupyter Notebook · License: NOASSERTION · Stargazers: 2632