There are 17 repositories under the video-diffusion-model topic.
[CSUR] A Survey on Video Diffusion Models
[CVPR 2024] Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution
Fine-Grained Open Domain Image Animation with Motion Guidance
A summary of key papers and blog posts for learning about diffusion models, along with a detailed list of all published diffusion robotics papers.
[ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model.
[ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models
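FreeInit's core step reinitializes the sampler's starting noise: it keeps the low spatio-temporal frequencies of a re-noised generation and resamples the high frequencies from fresh Gaussian noise. Below is a minimal sketch of that frequency-mixing step, assuming (B, C, T, H, W) latents; the function name, the box low-pass filter, and the cutoff value are illustrative choices, not the repo's official implementation.

```python
import torch

def freq_mix_3d(renoised_latent, fresh_noise, cutoff=0.25):
    """Sketch of FreeInit-style noise reinitialization: low frequencies from a
    re-noised latent, high frequencies from fresh noise. `cutoff` is a
    hypothetical normalized low-pass radius, not a value from the paper."""
    # 3D FFT over the temporal and spatial axes, shifted so DC sits in the center.
    latent_freq = torch.fft.fftshift(
        torch.fft.fftn(renoised_latent, dim=(-3, -2, -1)), dim=(-3, -2, -1))
    noise_freq = torch.fft.fftshift(
        torch.fft.fftn(fresh_noise, dim=(-3, -2, -1)), dim=(-3, -2, -1))

    # Centered box low-pass mask (the paper also considers smoother filters).
    B, C, T, H, W = renoised_latent.shape
    mask = torch.zeros_like(renoised_latent)
    t0, h0, w0 = int(T * cutoff), int(H * cutoff), int(W * cutoff)
    mask[..., T//2 - t0:T//2 + t0, H//2 - h0:H//2 + h0, W//2 - w0:W//2 + w0] = 1.0

    # Combine the two spectra and transform back to the latent domain.
    mixed = latent_freq * mask + noise_freq * (1 - mask)
    mixed = torch.fft.ifftshift(mixed, dim=(-3, -2, -1))
    return torch.fft.ifftn(mixed, dim=(-3, -2, -1)).real
```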
Generate video from text using AI
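For context, here is a minimal text-to-video sketch using Hugging Face diffusers with the public ModelScope checkpoint; this illustrates the task in general and is not necessarily the stack this repository uses (the `.frames[0]` indexing follows recent diffusers versions).

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Load a public text-to-video checkpoint (ModelScope 1.7B) in half precision.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Sample a short clip from a prompt; .frames[0] is the first batch item.
frames = pipe("an astronaut riding a horse on Mars", num_frames=16).frames[0]
print(export_to_video(frames))  # writes an .mp4 and returns its path
```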
Generate a video script, voice, and a talking face entirely with AI
VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models (CVPR 2024)
🎞️ [NeurIPS'24] MVSplat360: Feed-Forward 360° Scene Synthesis from Sparse Views
[arXiv 2024] Novel View Extrapolation with Video Diffusion Priors
Official implementation of UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified Attention Control
The official repository of "Spectral Motion Alignment for Video Motion Transfer using Diffusion Models".
[3DV 2025] MotionDreamer: Exploring Semantic Video Diffusion features for Zero-Shot 3D Mesh Animation
Documentation for a text-to-video generation API
IV-mixed Sampler: uses image diffusion models to ensure visual quality and video diffusion models to ensure temporal coherence, as sketched below
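A toy sketch of that division of labor: alternate scheduler steps between a video model denoising the full clip and an image model denoising each frame independently. The names `video_model`, `image_model`, and `step` are hypothetical stand-ins for a real pipeline's UNet and scheduler, and the paper's actual update rule interleaves the models differently; see the repo for the real algorithm.

```python
import torch

def iv_mixed_sample(video_model, image_model, timesteps, latents, step):
    """Alternate denoising updates between a video diffusion model (temporal
    coherence) and a per-frame image diffusion model (visual quality)."""
    for i, t in enumerate(timesteps):
        if i % 2 == 0:
            # Video model predicts noise for the whole (B, C, T, H, W) clip.
            eps = video_model(latents, t)
        else:
            # Image model denoises each frame independently: fold the time
            # axis into the batch, predict, then unfold.
            b, c, T, h, w = latents.shape
            frames = latents.permute(0, 2, 1, 3, 4).reshape(b * T, c, h, w)
            eps = image_model(frames, t)
            eps = eps.reshape(b, T, c, h, w).permute(0, 2, 1, 3, 4)
        latents = step(latents, eps, t)  # one scheduler update (e.g., DDIM)
    return latents
```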
Homepage for PixelDance. Paper -> https://arxiv.org/abs/2311.10982