Repositories of the Machine Learning and Perception Lab, Georgia Tech
visdial-challenge-starter-pytorch
Starter code in PyTorch for the Visual Dialog challenge
visdial-rl
PyTorch code for "Learning Cooperative Visual Dialog Agents using Deep Reinforcement Learning"
lang-emerge
[EMNLP 2017] Code for "Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog"
visdial-amt-chat
[CVPR 2017] AMT chat interface code used to collect the Visual Dialog dataset
vln-sim2real
Code for sim-to-real transfer of a pretrained Vision-and-Language Navigation (VLN) agent to a robot using ROS.
vln-chasing-ghosts
[NeurIPS 2019] Code for "Chasing Ghosts: Instruction Following as Bayesian State Tracking"
vln-sim2real-envs
Code and utilities for creating a Vision-and-Language Navigation (VLN) simulator environment from a physical space.
VT-F15-ECE6504-HW2
ECE6504 Homework 2
VT-F15-ECE6504-HW1
ECE6504 Homework 1
VT-F15-ECE6504-HW4
ECE6504 Homework 4
slurm_usage_utils
Helper functions for tracking GPU/CPU usage on SLURM-managed clusters.
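As an illustration of the kind of helper such a repo might provide (this is a hypothetical sketch, not the repo's actual API), per-job CPU/memory usage on a SLURM cluster can be read by parsing the pipe-delimited output of `sstat --parsable2 --format=JobID,AveCPU,MaxRSS`:

```python
# Hypothetical sketch: parse pipe-delimited SLURM `sstat --parsable2` output
# into a list of per-step usage records. Not taken from slurm_usage_utils.

def parse_sstat(output: str) -> list[dict]:
    """Turn `sstat --parsable2` style output into dicts keyed by column name."""
    lines = [ln for ln in output.strip().splitlines() if ln]
    header = lines[0].split("|")          # first line holds the column names
    return [dict(zip(header, row.split("|"))) for row in lines[1:]]

# Example output as it might be produced by:
#   sstat --parsable2 --format=JobID,AveCPU,MaxRSS -j 12345
sample = """JobID|AveCPU|MaxRSS
12345.batch|00:10:32|2048K
12345.0|01:02:03|10240K"""

records = parse_sstat(sample)
print(records[0]["AveCPU"])   # average CPU time of the batch step
```

In practice the raw text would come from running `sstat` via `subprocess.run` on a login node; the parsing step stays the same.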
VT-F15-ECE6504-HW0
ECE6504 Homework 0
VT-F15-ECE6504-HW3
ECE6504 Homework 3