Gopal Krishna's starred repositories
cs-video-courses
List of Computer Science courses with video lectures.
RobustVideoMatting
Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!
imaginAIry
Pythonic AI generation of images and videos
accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, and mixed-precision support
monodepth2
[ICCV 2019] Monocular depth estimation from a single image
deep-rl-class
This repo contains the syllabus of the Hugging Face Deep Reinforcement Learning Course.
lion-pytorch
🦁 Lion, a new optimizer discovered by Google Brain via evolutionary program search that is purportedly better than AdamW, in PyTorch
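Lion's appeal is its simplicity: it keeps a single momentum buffer and applies only the sign of an interpolated update, plus decoupled weight decay. A minimal pure-Python sketch of one scalar update step, following the algorithm described in the Lion paper (the function name and defaults here are illustrative, not the lion-pytorch API):

```python
def sign(x):
    # Sign function: -1, 0, or +1.
    return (x > 0) - (x < 0)

def lion_step(p, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update for a single scalar parameter.

    p: parameter value, grad: its gradient, m: momentum state.
    Returns (new_p, new_m).
    """
    # Interpolate momentum and gradient, then keep only the sign.
    update = sign(beta1 * m + (1 - beta1) * grad)
    # Apply the signed update with decoupled weight decay (as in AdamW).
    new_p = p - lr * (update + wd * p)
    # Momentum itself is updated with beta2 after the parameter step.
    new_m = beta2 * m + (1 - beta2) * grad
    return new_p, new_m
```

Because every coordinate moves by exactly ±lr (before weight decay), Lion's effective step is typically tuned 3-10x smaller than AdamW's.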
Awesome-Image-Inpainting
A curated list of image inpainting and video inpainting papers and resources
DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning
eye-in-the-sky
Satellite Image Classification using semantic segmentation methods in deep learning
xview-yolov3
xView 2018 Object Detection Challenge: YOLOv3 Training and Inference.
deep_imitative_models
Partial reimplementation of the Deep Imitative Models paper, ICLR '20
3D-Vision-and-Touch
Chart-based fusion of vision and touch for 3D shape reconstruction, leveraging graph convolutional networks, plus a dataset of simulated touch and vision signals from a robotic hand interacting with a large array of 3D objects. Combining both modalities consistently improves single-modality baselines, especially when the object is occluded by the touching hand, and reconstruction quality improves with the number of grasps provided.
Implicit-Q-Learning
PyTorch implementation of the implicit Q-learning algorithm (IQL)
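The core idea of IQL is to fit a value function V toward the state-action value Q using expectile regression instead of a plain mean, which avoids querying Q on out-of-distribution actions. A minimal sketch of the asymmetric L2 (expectile) loss on the residual u = Q(s, a) - V(s); the function name and default tau are illustrative:

```python
def expectile_loss(u, tau=0.7):
    """Asymmetric L2 loss used in IQL to fit V(s) toward Q(s, a).

    u: residual Q(s, a) - V(s).
    tau > 0.5 penalizes underestimation (u > 0) more heavily,
    pushing V toward an upper expectile of Q over dataset actions.
    """
    weight = tau if u > 0 else (1 - tau)
    return weight * u * u
```

With tau = 0.5 this reduces to ordinary (scaled) mean-squared error; as tau approaches 1, V approaches the maximum of Q over actions seen in the data.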