There are 29 repositories under the pose-tracking topic.
SLAM, visual localization, keypoint detection, image matching, pose/object tracking, depth/disparity/flow estimation, 3D graphics, and other related papers and code
LightTrack: A Generic Framework for Online Top-Down Human Pose Tracking
[IROS 2021] BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models
[NeurIPS'21] Unified tracking framework with a single appearance model. It supports Single Object Tracking (SOT), Video Object Segmentation (VOS), Multi-Object Tracking (MOT), Multi-Object Tracking and Segmentation (MOTS), Pose Tracking, Video Instance Segmentation (VIS), and class-agnostic MOT (e.g. TAO dataset).
Quickly add MediaPipe pose estimation and detection to your iOS app, enabling features driven by body or hand tracking.
PoseNet Using Unity MLAgents Barracuda Engine
[BMVC 2019] Code for "SRN: Stacked Regression Network for Real-time 3D Hand Pose Estimation"
Headless DeepLabCut (no GUI support)
OpenPifPaf plugin for PoseTrack
MediaPipe Pose Tracking AAR (Android) example
Unsupervised pose-guided anime video generation using generative adversarial networks.
Deep Learning Approach for UWB Applications (IEEE Journal of Biomedical and Health Informatics)
A native Android application for pose estimation and detection using ML Kit. The app detects body landmarks and recognizes the pose made by the user.
BlazePose: Body Segmentation for TFJS and NodeJS
Realtime pose detection in Unity Engine with NatML.
Human pose detection with Google MediaPipe, served via Flask and OpenCV.
Tools for machine learning of animal behavior
Implementing the ML Kit Pose Detection on stored video file
Realtime pose landmark detection with BlazePoseBarracuda in Unity
Example of pose recognition and workout repetition counting using the MediaPipe framework
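Repetition counting on top of pose landmarks is usually done by computing a joint angle (e.g. the elbow angle from shoulder, elbow, and wrist landmarks) and counting one rep per down-to-up cycle. A minimal stdlib-only Python sketch, independent of any repository above; the 90°/160° thresholds are illustrative assumptions, not values from these projects:

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c.

    a, b, c are (x, y) landmark coordinates, e.g. shoulder, elbow, wrist.
    """
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

class RepCounter:
    """Counts one repetition per down->up cycle of a joint angle."""

    def __init__(self, down_thresh=90.0, up_thresh=160.0):
        # Thresholds are assumptions for illustration; tune per exercise.
        self.down_thresh = down_thresh
        self.up_thresh = up_thresh
        self.stage = "up"
        self.count = 0

    def update(self, angle):
        if angle < self.down_thresh:
            self.stage = "down"
        elif angle > self.up_thresh and self.stage == "down":
            self.stage = "up"
            self.count += 1
        return self.count
```

Feeding the per-frame elbow angle into `RepCounter.update` yields a running rep count; the hysteresis between the two thresholds keeps landmark jitter from producing spurious reps.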
This repository contains a Unity project for a Mixed-Reality based drone controller. The Mixed-Reality device used in this project is a Hololens2.
Virtual-Inertial SLAM: VI-SLAM performance evaluations in virtual environments using real inertial data
The official repo for the extension of [NeurIPS'22] "APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking": https://github.com/pandorgan/APT-36K
Detects and tracks human pose in both images and video using computer vision.
Resources for SiTAR, a situated trajectory analysis system for AR which provides in-the-wild pose error estimates
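Pose error between an estimated and a ground-truth device pose is commonly reported as a Euclidean translation error plus a geodesic rotation error on SO(3). A minimal stdlib-only sketch of that generic metric (not SiTAR's specific estimator, which works in the wild without ground truth):

```python
import math

def translation_error(t_est, t_gt):
    """Euclidean distance between estimated and ground-truth positions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t_est, t_gt)))

def rotation_error_deg(R_est, R_gt):
    """Geodesic angle (degrees) between two 3x3 rotation matrices.

    Uses trace(R_est^T @ R_gt) = elementwise product sum, then
    angle = arccos((trace - 1) / 2), clipped for numerical safety.
    """
    tr = sum(R_est[i][j] * R_gt[i][j] for i in range(3) for j in range(3))
    cos_theta = max(-1.0, min(1.0, (tr - 1.0) / 2.0))
    return math.degrees(math.acos(cos_theta))
```

Averaging these two quantities over a trajectory gives the usual per-sequence translation/rotation error pair reported by SLAM and AR tracking evaluations.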
Push-up counter built with Python and MediaPipe.
BehavioralGPT
Code for robust visual pose estimation pipeline (end-to-end) for Spot with minimal input requirements.
ECE445 Senior Design: An AI gym robot for exercise guidance