TianqiTang's repositories
Awesome-NAS
A curated list of neural architecture search (NAS) resources.
collaborative-experts
Video embeddings for retrieval with natural language queries
habitat-api-1
A modular high-level library to train embodied AI agents across a variety of tasks, environments, and simulators.
Handover_Project_Document
This repository documents the project handover and covers two projects: a multi-modal continual learning benchmark, and an ongoing project exploring the use of LLMs for continual learning within OpenEQA.
HouseNavAgent
Navigation agent with Bayesian relational memory in the House3D environment
Matterport3DSimulator
AI Research Platform for Reinforcement Learning from Real Panoramic Images.
Neural-SLAM
PyTorch code for the ICLR 2020 paper "Learning to Explore using Active Neural SLAM"
OccupancyAnticipation
This repository contains code for our publication "Occupancy Anticipation for Efficient Exploration and Navigation" in ECCV 2020.
pytorch-grad-cam
PyTorch implementation of Grad-CAM
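The repository's own API is not shown here; as a minimal NumPy sketch of the underlying Grad-CAM computation (channel weights from globally average-pooled gradients, then a ReLU over the weighted sum of activation channels), assuming you already have the conv-layer activations and their gradients:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from conv-layer tensors of shape (C, H, W).

    weights: per-channel importance = global average pool of gradients.
    cam: ReLU of the channel-weighted sum of activation maps.
    """
    weights = gradients.mean(axis=(1, 2))             # (C,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    return np.maximum(cam, 0.0)
```

In practice the resulting map is upsampled to the input resolution and overlaid on the image; the repository handles hooking into the network to capture the activations and gradients.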
pytorch-gradual-warmup-lr
Gradual-warmup learning-rate scheduler for PyTorch
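This repository's scheduler class is not reproduced here; the idea itself, linearly ramping the learning-rate multiplier from a small start factor up to 1.0 over the first few epochs, can be sketched as a plain function (the parameter names below are illustrative, not the repo's API):

```python
def warmup_multiplier(epoch, warmup_epochs=5, start_factor=0.1):
    """Linearly scale the LR multiplier from start_factor to 1.0
    over the first warmup_epochs epochs, then hold at 1.0."""
    if epoch >= warmup_epochs:
        return 1.0
    return start_factor + (1.0 - start_factor) * epoch / warmup_epochs
```

A function like this can be plugged into PyTorch's built-in `torch.optim.lr_scheduler.LambdaLR` as the `lr_lambda`, after which a second scheduler (e.g. cosine or step decay) typically takes over.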
SC-SfMLearner-Release-1
Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video (NeurIPS 2019)
self-mono-sf
Self-Supervised Monocular Scene Flow Estimation (CVPR 2020)
splitnet-1
Code for SplitNet paper
Swin-Transformer-Semantic-Segmentation
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" on Semantic Segmentation.
torchinterp1d
1D interpolation for PyTorch
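The repository provides a differentiable, batched PyTorch version; the core operation, piecewise-linear 1D interpolation with clamping at the endpoints, can be sketched in plain Python (function name and signature here are illustrative, not this library's API):

```python
from bisect import bisect_right

def interp1d(x, y, xnew):
    """Piecewise-linear interpolation of y over sorted sample points x,
    evaluated at the query points xnew; clamps outside [x[0], x[-1]]."""
    out = []
    for q in xnew:
        if q <= x[0]:
            out.append(y[0])
        elif q >= x[-1]:
            out.append(y[-1])
        else:
            i = bisect_right(x, q) - 1          # left edge of the segment
            t = (q - x[i]) / (x[i + 1] - x[i])  # position within the segment
            out.append(y[i] + t * (y[i + 1] - y[i]))
    return out
```

The PyTorch version does the same segment search with tensor ops so gradients flow through `y` (and optionally `x`).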
Transformer-Explainability
[CVPR 2021] Official PyTorch implementation of Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer-based networks.
Transformer-MM-Explainability
[ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
ViLCo
Video-Language Continual Learning Benchmark