Srijan Das's starred repositories
Awesome-Transformer-Attention
A comprehensive paper list on Vision Transformers/Attention, including papers, code, and related websites
pytorchvideo
A deep learning library for video understanding research.
TimeSformer
The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?"
ICCV-2023-Papers
ICCV 2023 Papers: discover cutting-edge research from ICCV 2023, the leading computer vision conference. Stay updated on the latest in computer vision and deep learning, with code included.
SPT_LSA_ViT
Implementation of Vision Transformer for Small-size Datasets
Limited-data-vits
[WACV 2024] Code for "Limited Data, Unlimited Potential: A Study on ViTs Augmented by Masked Autoencoders"
PoseAwareVT
Code for the paper Seeing the Pose in the Pixels: Learning Pose-Aware Representations in Vision Transformers
2s-AGCN-For-Daily-Living
2s-AGCN on Smarthome (dataset for daily living)
Fibottention
Inceptive Visual Representation Learning with Diverse Attention Across Heads
Toyota_Smarthome
Tools for Toyota Smarthome datasets
mavrec-code
This code is provided for reproducibility of results in the paper: Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve Aerial Visual Perception?
FreqMixFormer
[ACM MM 2024] Frequency Guidance Matters: Skeletal Action Recognition by Frequency-Aware Mixed Transformer
improved_HAR_on_Toyota
Improved action recognition with separable spatio-temporal attention using alternative skeletal and video pre-processing
separable_STA
Implementation of the Separable Spatio-Temporal Attention (STA) network
synchronization-is-all-you-need
Synchronization is All You Need: Exocentric-to-Egocentric Transfer for Temporal Action Segmentation with Unlabeled Synchronized Video Pairs [ECCV, 2024]
Pyvideoresearch_new
The master branch of PyVideoResearch, committed to this repository with a few changes to make use of the pre-trained models.