pengchengu's starred repositories
Video-Motion-Customization
VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models (CVPR 2024)
torchdynamo
A Python-level JIT compiler designed to make unmodified PyTorch programs faster.
mistral-src
Reference implementation of the Mistral AI 7B v0.1 model.
Chinese-Mixtral-8x7B
Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B)
accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
openai-cookbook
Examples and guides for using the OpenAI API
multimodal
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
Gemini-API
An unofficial Python package that returns responses from Google Gemini using cookie values.
CVPR2024-Papers-with-Code
A collection of CVPR 2024 papers and open-source projects
latent-diffusion
High-Resolution Image Synthesis with Latent Diffusion Models
plip
Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (Nature Medicine). PLIP is a large-scale pre-trained model that can be used to extract visual and language features from pathology images and text descriptions. The model is a fine-tuned version of the original CLIP model.
fastserve-ai
Machine learning serving focused on GenAI, with simplicity as the top priority.