There are 29 repositories under the motion-generation topic.
[NeurIPS 2023] MotionGPT: Human Motion as a Foreign Language, a unified motion-language generation model using LLMs
[SIGGRAPH 2022 Journal Track] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
HumanML3D: A large and diverse 3D human motion-language dataset.
Official implementation of "MoMask: Generative Masked Modeling of 3D Human Motions (CVPR 2024)"
[CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model
Official implementation for "Generating Diverse and Natural 3D Human Motions from Texts (CVPR 2022)."
[CVPR 2022] PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision (Oral)
[ICCV-2023] Official code for work "HumanMAC: Masked Motion Completion for Human Motion Prediction".
List of recent advances for human avatars, including generation, reconstruction, and editing, etc.
[ICML 2024] 🍅HumanTOMATO: Text-aligned Whole-body Motion Generation
[CVPR 2024] Official Implementation of "Seamless Human Motion Composition with Blended Positional Encodings".
Official implementations for "Action2Motion: Conditioned Generation of 3D Human Motions (ACM MultiMedia 2020)"
[Open-source Project] UniMoCap: community implementation to unify the text-motion datasets (HumanML3D, KIT-ML, and BABEL) and whole-body motion dataset (Motion-X).
Official implementation of the NeurIPS 2022 paper "HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes"
🕹️ Official Implementation of Conditional Motion In-betweening (CMIB) 🏃
[NeurIPS 2023] Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs
motion_generate_tools is a Blender addon for generating motion using MDM (Human Motion Diffusion Model).
Official implementation of "TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts (ECCV 2022)"
🎶 Music-Driven Conducting Motion Generation (IEEE ICME'21 Best Demo)
🔥 Motion Mamba: Efficient and Long Sequence Motion Generation with Hierarchical and Bidirectional Selective SSM
[ECCV 2022] SAGA: Stochastic Whole-Body Grasping with Contact
📖 Paper: Robust Motion In-betweening 🏃
SignAvatars: A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark
This is an open collection of state-of-the-art (SOTA), novel Text-to-X (X can be anything) methods: papers, code, and datasets.
Official implementation of the CVPR 2024 highlight paper "Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Scene Affordance"
[CVPRW 2024] Official Implementation of "in2IN: Leveraging individual Information to Generate Human INteractions".
Official implementation of ICCV 2023 Oral Paper "Role-Aware Interaction Generation from Textual Description"
Code & demo for the animation of still facial landmarks from an initial pose.