chenchen's starred repositories
Developer-Books
A curated list of books on programming and software development
nlp-papers-with-arxiv
Statistics and accepted-paper lists of NLP conferences, with arXiv links
CVPR21Chal-SLR
This repo contains the official code of our work SAM-SLR which won the CVPR 2021 Challenge on Large Scale Signer Independent Isolated Sign Language Recognition.
HandyFigure
HandyFigure provides the source files (usually PPT files) for paper figures
MocapNET
We present MocapNET, a real-time method that estimates the 3D human pose directly in the popular Bio Vision Hierarchy (BVH) format, given estimations of the 2D body joints originating from monocular color images. Our contributions include: (a) a novel and compact 2D pose NSRM representation; (b) a human body orientation classifier and an ensemble of orientation-tuned neural networks that regress the 3D human pose, while also allowing the body to be decomposed into an upper and a lower kinematic hierarchy, which permits recovery of the human pose even under significant occlusions; (c) an efficient inverse kinematics solver that refines the neural-network-based solution, providing 3D human pose estimations consistent with the limb sizes of a target person (if known). Together these yield a 33% accuracy improvement on the Human3.6M (H3.6M) dataset compared to the baseline method (MocapNET) while maintaining real-time performance.
motion-transformer
A Spatio-temporal Transformer for 3D Human Motion Prediction
docker-pytorch
A Docker image for PyTorch
awesome-causality-algorithms
An index of algorithms for learning causality with data
awesome-multimodal-ml
Reading list for research topics in multimodal machine learning
Awesome-SLP
A curated list of awesome work on Sign Language Production
ProgressiveTransformersSLP
Source code for "Progressive Transformers for End-to-End Sign Language Production" (ECCV 2020)
awesome-grounding
awesome grounding: A curated list of research papers in visual grounding
awesome-vision-language-pretraining-papers
Recent Advances in Vision and Language PreTrained Models (VL-PTMs)
vit-pytorch
Implementation of the Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch