Jeremy_cui (jiemingcui)

Company: Peking University

Location: Beijing

Home Page: https://jiemingcui.github.io/

Jeremy_cui's starred repositories

CAMDM

(SIGGRAPH 2024) Official repository for "Taming Diffusion Probabilistic Models for Character Control"

Language: C# · Stargazers: 166 · Issues: 0

CLoSD

CLoSD: Closing the Loop between Simulation and Diffusion for multi-task character control

License: MIT · Stargazers: 17 · Issues: 0

CooHOI

[NeurIPS 2024 Spotlight] CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics

Stargazers: 13 · Issues: 0

dial-mpc

Official implementation for the paper "Full-Order Sampling-Based MPC for Torque-Level Locomotion Control via Diffusion-Style Annealing". DIAL-MPC is a sampling-based MPC framework for full-order, torque-level control of legged robots that achieves both precision and agility without any training.

Language: Python · License: Apache-2.0 · Stargazers: 331 · Issues: 0

human2humanoid

[IROS 2024] Learning Human-to-Humanoid Real-Time Whole-Body Teleoperation. [CoRL 2024] OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning

Language: Python · Stargazers: 167 · Issues: 0

SMPLSim

Simulates SMPL humanoids, supporting the PHC/PHC-MJX/PULSE/SimXR code bases.

Language: Python · License: BSD-3-Clause · Stargazers: 119 · Issues: 0

PacerPlus

Official implementation of the paper "PACER+: On-Demand Pedestrian Animation Controller in Driving Scenarios" (CVPR 2024).

Language: Python · Stargazers: 55 · Issues: 0

GVHMR

Code for "GVHMR: World-Grounded Human Motion Recovery via Gravity-View Coordinates", SIGGRAPH Asia 2024

Language: Jupyter Notebook · License: NOASSERTION · Stargazers: 364 · Issues: 0

clip-retrieval

Easily compute CLIP embeddings and build a CLIP retrieval system with them

Language: Jupyter Notebook · License: MIT · Stargazers: 2374 · Issues: 0

awesome-ai-agents

A list of AI autonomous agents

License: NOASSERTION · Stargazers: 10546 · Issues: 0

expressive-humanoid

[RSS 2024]: Expressive Whole-Body Control for Humanoid Robots

Language: Python · License: NOASSERTION · Stargazers: 161 · Issues: 0

robot-utility-models

Robot Utility Models are trained on a diverse set of environments and objects, and then can be deployed in novel environments with novel objects without any further data or training.

Language: Python · License: MIT · Stargazers: 154 · Issues: 0

PR2-Platform

PR2 is a humanoid robot testbed designed for both entry-level students and professional users, with support for bipedal locomotion, multi-modal manipulation, and interaction with vision and language foundation models.

Language: Python · License: NOASSERTION · Stargazers: 14 · Issues: 0

Birds-eye-view-Perception

[IEEE T-PAMI] Awesome BEV perception research and cookbook for audiences at all levels in autonomous driving

Language: Python · License: Apache-2.0 · Stargazers: 1182 · Issues: 0

ReKep

ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation

Language: Python · Stargazers: 429 · Issues: 0

SkillMimic

Official code release for the paper "SkillMimic: Learning Reusable Basketball Skills from Demonstrations"

Language: Python · License: Apache-2.0 · Stargazers: 159 · Issues: 0

OmniIsaacGymEnvs

Reinforcement Learning Environments for Omniverse Isaac Gym

Language: Python · License: NOASSERTION · Stargazers: 839 · Issues: 0

ManiSkill

SAPIEN Manipulation Skill Framework, a GPU parallelized robotics simulator and benchmark

Language: Python · License: Apache-2.0 · Stargazers: 781 · Issues: 0

Point-SAM

Official repository of "Point-SAM: Promptable 3D Segmentation Model for Point Clouds". Provides code for running the demo and links to download checkpoints.

Language: Python · License: MIT · Stargazers: 124 · Issues: 0

Awesome-Robotics-3D

A curated list of 3D vision papers related to robotics in the era of large models (LLMs/VLMs), inspired by awesome-computer-vision; includes papers, code, and related websites.

Stargazers: 489 · Issues: 0

BlenderGPT

Use commands in English to control Blender with OpenAI's GPT-4

Language: Python · License: MIT · Stargazers: 4419 · Issues: 0

Grounded-SAM-2

Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 849 · Issues: 0

SpacetimeGaussians

[CVPR 2024] Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis

Language: Python · License: NOASSERTION · Stargazers: 569 · Issues: 0

Embodied_AI_Paper_List

[Embodied-AI-Survey-2024] Paper list and projects for Embodied AI

Stargazers: 590 · Issues: 0

PQ3D

Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries"

Language: Python · License: MIT · Stargazers: 43 · Issues: 0

LEGENT

Open Platform for Embodied Agents

Language: Python · License: Apache-2.0 · Stargazers: 252 · Issues: 0

umi-on-legs

UMI on Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers

Language: Python · License: MIT · Stargazers: 180 · Issues: 0

MambaVision

Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone

Language: Python · License: NOASSERTION · Stargazers: 741 · Issues: 0