There are 13 repositories under the motion-estimation topic.
A curated list of papers & resources linked to 3D reconstruction from images.
Optical Flow Prediction with TensorFlow. Implements "PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume," by Deqing Sun et al. (CVPR 2018)
[ECCV2020 Oral] Learning Lane Graph Representations for Motion Forecasting
https://nanonets.com/blog/optical-flow/
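The linked blog post surveys optical-flow estimation. As a minimal illustration of the classic Lucas-Kanade method it discusses (a sketch, not code from any repository listed here), the flow at a pixel can be solved by least squares over a small window:

```python
import numpy as np

def lucas_kanade(I1, I2, x, y, win=2):
    """Estimate flow (u, v) at pixel (x, y) by solving the brightness-
    constancy constraint Ix*u + Iy*v + It = 0 over a (2*win+1)^2 window."""
    # spatial gradients of the first frame, temporal difference
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    # gather constraints from the local window into A [u v]^T = b
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv  # (u, v) in pixels

# toy example: a horizontal ramp shifted one pixel to the right
I1 = np.tile(np.arange(16, dtype=float), (16, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = lucas_kanade(I1, I2, x=8, y=8)
# recovers u ≈ 1, v ≈ 0 for this synthetic translation
```

The learned methods in the repositories above (e.g. PWC-Net) replace this hand-derived linear solve with trained feature pyramids and cost volumes.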
Focuses on explaining the encoder's internal logic and parameters, collecting everything from the basics up to algorithms, diagrams, and organized write-ups found nowhere else online; hence the name Ultimate Tutorial.
Motion R-CNN: Mask R-CNN with support for 3D motion estimation (prototype)
Demo for "MoSculp: Interactive Visualization of Shape and Time"
[MICCAI'18] Joint Learning of Motion Estimation and Segmentation for Cardiac MR Image Sequences
Visual Computing: markerless motion and/or pose and/or face detection and/or tracking, and its 3D reconstruction (in real time)
Blind deconvolution of motion blur
:movie_camera: Prototype of 3D object tracking via camera
[ECCV 2022 Oral] Source code for "A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow"
[ICCV 2023] Learning Fine-Grained Features for Pixel-wise Video Correspondences
Video stabilizer that smooths camera motion.
This repository is about video compression, specifically the motion estimation block (ME block) of a video encoder. It is a research project aimed at developing an efficient motion estimation algorithm so that video compression can keep pace with high-frame-rate and high-resolution video.
Full-search-based motion estimation processor written in Verilog-HDL
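The full-search (exhaustive block-matching) strategy these two entries refer to can be sketched in a few lines: for each block of the current frame, every candidate displacement within a search range is scored against the reference frame, typically by sum of absolute differences (SAD). This is a hedged illustration of the general technique, not code from either repository:

```python
import numpy as np

def full_search_me(ref, cur, bx, by, bsize=4, srange=3):
    """Find the motion vector (dx, dy) minimizing the SAD between the
    current block at (bx, by) and candidate blocks in the reference frame."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            ry, rx = by + dy, bx + dx
            # skip candidates that fall outside the reference frame
            if ry < 0 or rx < 0 or ry + bsize > ref.shape[0] or rx + bsize > ref.shape[1]:
                continue
            sad = np.abs(ref[ry:ry + bsize, rx:rx + bsize] - block).sum()
            if sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv, best

# toy frames: a bright square moves 2 px right and 1 px down
ref = np.zeros((16, 16))
ref[4:8, 4:8] = 1.0
cur = np.zeros((16, 16))
cur[5:9, 6:10] = 1.0
mv, sad = full_search_me(ref, cur, bx=6, by=5)
# mv points back to the block's origin in the reference frame: (-2, -1)
```

The O(search_range²) cost per block is why full search is often implemented in hardware (as in the Verilog-HDL processor above) and why fast search patterns exist for software encoders.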
Collection of discrete- and continuous-time motion parametrizations.
Assignments for the Coursera course by UPenn
Implementation of ChipQA (https://ieeexplore.ieee.org/document/9540785)
CVPR 2018: Real-World Repetition Estimation by Div, Grad and Curl
Kalman Filter Implementation for object tracking and motion estimation
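A standard way to apply a Kalman filter to motion estimation, as in the entry above, is a constant-velocity model: the state holds position and velocity, only position is measured, and velocity is inferred. A minimal 1-D sketch of this general technique (model matrices here are illustrative assumptions, not taken from the repository):

```python
import numpy as np

# Constant-velocity Kalman filter for 1-D position tracking.
# State x = [position, velocity]^T; only the position is observed.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # measurement model: position only
Q = 1e-4 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.25]])                  # measurement noise covariance (assumed)

x = np.zeros((2, 1))                    # initial state estimate
P = np.eye(2)                           # initial estimate covariance

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# feed measurements of an object moving at 1 unit per step
for t in range(1, 21):
    x, P = kf_step(x, P, np.array([[float(t)]]))
# the velocity estimate x[1, 0] converges toward the true value of 1
```

For 2-D object tracking the same structure applies with a four-dimensional state [x, y, vx, vy] and a 2x4 measurement matrix.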
Synthesizing multi-mode handwriting motion with kinematics features
Time-to-contact map by joint estimation of up-to-scale inverse depth and global motion using a single event camera