Repositories under the stable-video-diffusion topic:
OneDiff: An out-of-the-box acceleration library for diffusion models.
Best inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs.
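OneDiff's usual workflow is to compile an existing Diffusers pipeline in place. A minimal sketch, assuming the `onediffx` helper package shipped with the OneDiff repo and the public SVD-XT checkpoint on the Hugging Face Hub:

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from onediffx import compile_pipe  # assumption: compile helper exported by the OneDiff repo

# Load the stock Diffusers SVD-XT pipeline in half precision on the GPU.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Compile once; the first generation pays the compilation cost,
# subsequent calls to pipe(...) reuse the optimized graph.
pipe = compile_pipe(pipe)
```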
🎞️ [NeurIPS'24] MVSplat360: Feed-Forward 360° Scene Synthesis from Sparse Views
Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis (ECCV 2024 Oral) - Official Implementation
stable-video-diffusion-webui: generate videos from images (img2vid)
Educational repository for applying the main video data curation techniques presented in the Stable Video Diffusion paper.
Consistency Distillation with Target Timestep Selection and Decoupled Guidance
👆 PyTorch implementation of "Ctrl-V: Higher Fidelity Video Generation with Bounding-Box Controlled Object Motion"
Help Elon Musk Launch a Rocket 🚀
Created with Stability AI's Stable Video Diffusion XT 1.1 Image-to-Video latent diffusion model (SVD XT 1.1)
Deploy and invoke Stability AI's Stable Video Diffusion XT (SVD-XT) 1.1 foundation model on Amazon SageMaker.
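Video generation runs for minutes, so a deployment like this typically sits behind an asynchronous SageMaker endpoint. A minimal invocation sketch with boto3; the endpoint name, S3 locations, and request schema below are illustrative assumptions, not the repo's actual configuration:

```python
import boto3

runtime = boto3.client("sagemaker-runtime")

# Hypothetical endpoint and input object; the request JSON would typically carry
# a base64-encoded conditioning image plus sampling parameters.
response = runtime.invoke_endpoint_async(
    EndpointName="svd-xt-1-1",                           # assumption
    InputLocation="s3://my-bucket/inputs/request.json",  # assumption
    ContentType="application/json",
)

# The generated video lands in S3 once the asynchronous inference completes.
print(response["OutputLocation"])
```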
Stable Video Diffusion running on a Replicate inference endpoint.
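Against a Replicate endpoint, generation is a single client call. A minimal sketch using the `replicate` Python client; the model slug and input key are assumptions based on Replicate's public SVD listing:

```python
import replicate  # requires REPLICATE_API_TOKEN in the environment

# Assumed slug and input schema; append ":<version-id>" to pin an exact model version.
output = replicate.run(
    "stability-ai/stable-video-diffusion",
    input={"input_image": open("conditioning.png", "rb")},
)

print(output)  # URL(s) of the generated video file
```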
A simple GUI for Stable Video Diffusion
Stable Video Diffusion (img2vid) packaged as a Cog model
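Wrapping SVD as a Cog model means implementing Cog's predictor interface so the container exposes a standard prediction API. A minimal sketch built on the Diffusers SVD-XT pipeline (an assumption; the actual repo may load the model differently):

```python
import torch
from cog import BasePredictor, Input, Path
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image


class Predictor(BasePredictor):
    def setup(self):
        # Weights are loaded once when the container starts.
        self.pipe = StableVideoDiffusionPipeline.from_pretrained(
            "stabilityai/stable-video-diffusion-img2vid-xt",
            torch_dtype=torch.float16,
            variant="fp16",
        ).to("cuda")

    def predict(
        self,
        input_image: Path = Input(description="Conditioning image"),
        num_frames: int = Input(description="Frames to generate", default=25),
    ) -> Path:
        image = load_image(str(input_image)).resize((1024, 576))
        frames = self.pipe(image, num_frames=num_frames, decode_chunk_size=4).frames[0]
        out_path = "/tmp/output.mp4"
        export_to_video(frames, out_path, fps=7)
        return Path(out_path)
```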
Generative Models by Stability AI (bugfixes & optimizations for low-VRAM Stable Video Diffusion)
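For reference, the same low-VRAM goal can be reached through the HuggingFace Diffusers pipeline, sketched below under the assumption of the public SVD-XT checkpoint: offload idle sub-models to the CPU and decode the latent frames in small chunks.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
# Keep only the active sub-model on the GPU; trades speed for peak VRAM.
pipe.enable_model_cpu_offload()

image = load_image("conditioning.png").resize((1024, 576))
# A small decode_chunk_size limits how many frames the VAE decodes at once.
frames = pipe(image, decode_chunk_size=2, num_frames=25).frames[0]
export_to_video(frames, "output.mp4", fps=7)
```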