qihao._.huang's repositories
machine-configuration
configuration for a new ubuntu/mac machine
cpp-python-socket
Simple socket communication between C++ and Python
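The C++/Python pairing above rests on plain TCP sockets, which both languages expose through the same Berkeley sockets model. A minimal sketch of the Python side, with an in-process echo server standing in for the C++ peer (the port and message are illustrative, not taken from the repo):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007  # hypothetical address, not from the repo

def echo_server(ready):
    # Minimal TCP echo server; a C++ peer would behave the same way
    # using socket()/bind()/listen()/accept()/recv()/send().
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                 # signal that the server is accepting
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)      # echo the message back

ready = threading.Event()
threading.Thread(target=echo_server, args=(ready,), daemon=True).start()
ready.wait()

# Client side: connect, send, and read the echoed reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from python")
    reply = cli.recv(1024)

print(reply)  # b'hello from python'
```

Because both ends speak raw TCP, swapping the Python server for a C++ one changes nothing on the client side.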
real-time-face-sentimental-app
Real-time sentiment analysis on the Android mobile platform
50projects50days
50+ mini web projects using HTML, CSS & JS
AdelaiDepth
This repo contains the projects 'Virtual Normal', 'DiverseDepth', and '3D Scene Shape', which address monocular depth estimation and 3D scene reconstruction from a single image.
awesome-multiple-object-tracking
Paper list for multiple object tracking
Awesome-Optical-Flow
A curated list of papers on optical flow and related work.
awesome-radar-perception
A curated list of radar datasets, detection, tracking and fusion
awesome-state-of-depth-completion
Current state of supervised and unsupervised depth completion methods
byteps
A high-performance, generic framework for distributed DNN training
cocoapi
COCO API Customized for YouTubeVIS evaluation
GeoNet
Code for GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose (CVPR 2018)
minitorch
The full minitorch student suite.
mmdetection
OpenMMLab Detection Toolbox and Benchmark
MMP_Track1_ICCV21
Tracking code for the winner of track 1 in the MMP-Tracking Challenge at ICCV 2021 Workshop.
mobile-vision
Mobile vision models and code
monocular-depth-prediction
Monocular depth prediction for a Computational Photography course project
pytorch-distributed
A quickstart and benchmark for PyTorch distributed training.
pytorch-mgpu-cifar10
Testing multi-GPU training with PyTorch
PyTorch-Universal-Docker-Template
Template repository to build PyTorch projects from source on any version of PyTorch/CUDA/cuDNN.
TrackEval
HOTA (and other) evaluation metrics for Multi-Object Tracking (MOT).
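MOT metrics like HOTA ultimately score spatial overlap between predicted and ground-truth boxes, usually via intersection-over-union. A minimal IoU helper illustrating that building block (an illustrative sketch, not TrackEval's implementation):

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2); returns a value in [0, 1]."""
    # Intersection rectangle (clamped to zero width/height if disjoint).
    iw = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    ih = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 1.0
print(iou((0, 0, 1, 1), (2, 2, 3, 3)))  # disjoint boxes  -> 0.0
```

Detection-to-track matching at a chosen IoU threshold is then what metrics such as HOTA, MOTA, and IDF1 aggregate in their different ways.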
UniTrack
Unified tracking framework with a single appearance model. It supports Single Object Tracking (SOT), Video Object Segmentation (VOS), Multi-Object Tracking (MOT), Multi-Object Tracking and Segmentation (MOTS), Pose Tracking, Video Instance Segmentation (VIS), and class-agnostic MOT (e.g. TAO dataset).