TanmDL's repositories
WATT-EffNet
WATT-EffNet: A Lightweight and Accurate Model for Classifying Aerial Disaster Images
ACDC
ACDC: Online Unsupervised Cross-Domain Adaptation
ASM
(NeurIPS 2020) Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation
CLS-ER
The official PyTorch code for the ICLR 2022 paper "Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System"
DeepSurvivalMachines
Deep Survival Machines - Fully Parametric Survival Regression
dimensions
Code for "The Intrinsic Dimension of Images and Its Impact on Learning" - ICLR 2021 Spotlight https://openreview.net/forum?id=XJk19XzGq2J
few-shot-gan-adaptation
[CVPR '21] Official repository for Few-shot Image Generation via Cross-domain Correspondence
interpret
Fit interpretable models. Explain blackbox machine learning.
l2p
Learning to Prompt (L2P) for Continual Learning @ CVPR22
LIA
[ICLR 22] Latent Image Animator: Learning to Animate Images via Latent Space Navigation
mae
PyTorch implementation of MAE (https://arxiv.org/abs/2111.06377)
MEAT-TIL
CVPR 2022: Meta-attention for ViT-backed Continual Learning
mlp-mixer-pytorch
An All-MLP solution for Vision, from Google AI
Projects-Solutions
:pager: Links to others' solutions to Projects (https://github.com/karan/Projects/)
pytorch-grad-cam
Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and XGrad-CAM
Salehi_submitted_2020
This repository contains the code to reproduce the results of our proposed novelty detection algorithm based on an adversarially robust autoencoder.
segformer-pytorch
Implementation of SegFormer, an attention + MLP neural network for segmentation, in PyTorch
vision-language-models-are-bows
Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR 2023
vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
web-page
The repository for my personal webpage
WOODS
Benchmarks for Out-of-Distribution Generalization in Time Series Tasks