luckym (ZhangMaom)

luckym's repositories

hmr

Project page for End-to-end Recovery of Human Shape and Pose

License: NOASSERTION | Stargazers: 0 | Issues: 0

awesome-hand-pose-estimation

Awesome work on hand pose estimation/tracking

Stargazers: 0 | Issues: 0

murauer

Implementation of the semi-supervised method for hand pose estimation introduced in our WACV 2019 paper "MURAUER: Mapping Unlabeled Real Data for Label AUstERity"

License: GPL-3.0 | Stargazers: 0 | Issues: 0

SO-HandNet

Code repository for our paper entitled "SO-HandNet: Self-Organizing Network for 3D Hand Pose Estimation with Semi-supervised Learning", ICCV 2019.

Stargazers: 0 | Issues: 0

V2V-PoseNet_RELEASE

Official Torch7 implementation of "V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map", CVPR 2018

License: MIT | Stargazers: 0 | Issues: 0

MonocularTotalCapture

Code for CVPR19 paper "Monocular Total Capture: Posing Face, Body and Hands in the Wild"

Stargazers: 0 | Issues: 0

smplify-x

Expressive Body Capture: 3D Hands, Face, and Body from a Single Image

License: NOASSERTION | Stargazers: 0 | Issues: 0

TeachNet_Teleoperation

Vision-based Teleoperation of Shadow Dexterous Hand using End-to-End Deep Neural Network

Stargazers: 0 | Issues: 0

ecg_pytorch

ECG classification

Stargazers: 0 | Issues: 0

Awesome-ICCV2019

Up-to-date ICCV 2019 paper acceptance results

Stargazers: 0 | Issues: 0

Person_reID_baseline_pytorch

A tiny, friendly, strong PyTorch implementation of a person re-identification baseline. Tutorial 👉 https://github.com/layumi/Person_reID_baseline_pytorch/tree/master/tutorial

License: MIT | Stargazers: 0 | Issues: 0

contactdb_utils

Python and ROS (C++) utilities for the ContactDB dataset

License: GPL-3.0 | Stargazers: 0 | Issues: 0

fast-depth

ICRA 2019 "FastDepth: Fast Monocular Depth Estimation on Embedded Systems"

License: MIT | Stargazers: 0 | Issues: 0

craves.ai

CRAVES: Controlling Robotic Arm with a Vision-based, Economic System

License: GPL-3.0 | Stargazers: 0 | Issues: 0

handpose

CrossInfoNet (CVPR 2019) for hand pose estimation

Stargazers: 0 | Issues: 0

Visualizing-CNNs-for-monocular-depth-estimation

Official implementation of "Visualization of Convolutional Neural Networks for Monocular Depth Estimation"

License: MIT | Stargazers: 0 | Issues: 0

fusenet-hand-pose

Implementation of E. Kazakos, C. Nikou, I. A. Kakadiaris. On the Fusion of RGB and Depth Information for Hand Pose Estimation, ICIP, 2018.

Stargazers: 0 | Issues: 0

Revisiting_Single_Depth_Estimation

Official implementation of "Revisiting Single Image Depth Estimation: Toward Higher Resolution Maps with Accurate Object Boundaries"

Stargazers: 0 | Issues: 0

Pose-REN

Demo code for "Pose Guided Structured Region Ensemble Network for Cascaded Hand Pose Estimation"

Stargazers: 0 | Issues: 0

GASDA

Geometry-Aware Symmetric Domain Adaptation for Monocular Depth Estimation, CVPR 2019

Stargazers: 0 | Issues: 0

DORN_pytorch

PyTorch implementation of Deep Ordinal Regression Network for Monocular Depth Estimation

Stargazers: 0 | Issues: 0

craves_control

The control module of "CRAVES: Controlling Robotic Arm with a Vision-based, Economic System"

Stargazers: 0 | Issues: 0

MAR

PyTorch code for our CVPR'19 (oral) work: Unsupervised person re-identification by soft multilabel learning

Stargazers: 0 | Issues: 0

VisGel

[CVPR 2019] Connecting Touch and Vision via Cross-Modal Prediction

Stargazers: 0 | Issues: 0

DukeMTMC-attribute

23 hand-annotated attributes of the DukeMTMC dataset

License: MIT | Stargazers: 0 | Issues: 0

MonocularRGB_3D_Handpose_WACV18

Using a single RGB frame for real time 3D hand pose estimation in the wild

Stargazers: 0 | Issues: 0

sphereHand

This project corresponds to "Self-supervised 3D hand pose estimation through training by fitting", accepted at CVPR 2019.

License: MIT | Language: C | Stargazers: 0 | Issues: 0

tianchidasai

This project is built on SenseTime's mmdetection: https://github.com/open-mmlab/mmdetection

1. Installation
(1) Environment: Linux (tested on Ubuntu 16.04 and CentOS 7.2), Python 3.4+, PyTorch 1.0, Cython, mmcv >= 0.2.2.
(2) Install steps. The uploaded model is already installed, so reinstalling should not be necessary; the procedure is included just to be safe. First compile: cd code, cd mmdetection, pip install cython, ./compile.sh. Then install: python (or python3) setup.py install, or alternatively "pip install ." (note the trailing dot).

2. Data preparation
(1) Unpack the data: run zip.py in the code directory; the extracted data is stored under data/First_round_data/.
(2) Data augmentation: run data_augmentation.py in the code directory to generate the augmented image set and the new annotation .pkl file under mmdetection/data/coco/annotations.
(3) Test file list: run pickle_file_creation.py in the code directory to read the test-set image names.

3. Model training
In code/mmdetection/, run ./tools/dist_train.sh ./config/retinanet_r101_fpn_1x.py 4 --validate (the 4 is the number of GPUs).

4. Model testing
In code/mmdetection/, run python tools/test.py config/retinanet_r101_fpn_1x.py work_dirs/fifi/epoch_20.pth --gpus 4 --out final1.pkl, then run json_to_json.py in the code directory; the final submission file final.json is written to the submit/ directory. A driver-script sketch that chains these steps together appears after this entry.

Stargazers: 0 | Issues: 0
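The tianchidasai README above walks through data preparation, training, and testing as separate shell commands. Below is a minimal driver sketch that chains those steps with Python's subprocess module. The directory layout (code/, code/mmdetection/), the script names, the GPU count, and the --out final1.pkl flag for tools/test.py are taken or inferred from the README and are assumptions about this particular repo, not verified mmdetection usage.

```python
# Hypothetical end-to-end driver for the tianchidasai pipeline described above.
# The "code/" and "code/mmdetection/" layout, script names, GPU count, and the
# "--out final1.pkl" flag are assumptions taken from the README.
import subprocess

def run(cmd, cwd=None):
    """Run one pipeline step, echoing it and stopping on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

if __name__ == "__main__":
    code_dir = "code"
    mmdet_dir = "code/mmdetection"
    config = "config/retinanet_r101_fpn_1x.py"

    # 1. Data preparation: unpack, augment, and collect test image names.
    run(["python", "zip.py"], cwd=code_dir)
    run(["python", "data_augmentation.py"], cwd=code_dir)
    run(["python", "pickle_file_creation.py"], cwd=code_dir)

    # 2. Training: RetinaNet R101-FPN on 4 GPUs with validation enabled.
    run(["./tools/dist_train.sh", "./" + config, "4", "--validate"], cwd=mmdet_dir)

    # 3. Testing: run inference, then convert the pickle output to final.json in submit/.
    run(["python", "tools/test.py", config,
         "work_dirs/fifi/epoch_20.pth", "--gpus", "4", "--out", "final1.pkl"],
        cwd=mmdet_dir)
    run(["python", "json_to_json.py"], cwd=code_dir)
```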