There are 15 repositories under the efficient-transformers topic.
Mask Transfiner for High-Quality Instance Segmentation, CVPR 2022
Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time"
Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021).
[CVPR 2023] IMP: iterative matching and pose estimation with transformer-based recurrent module
[NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
[ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Liu, Zhangyang Wang
Master's thesis with code investigating methods for adding long-context reasoning to low-resource languages without pre-training from scratch. We investigate whether multilingual models can inherit these properties by converting them into an efficient Transformer (such as the Longformer architecture).
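A Longformer-style conversion replaces dense self-attention with a sliding local window (plus a handful of global tokens). A minimal sketch of the local-window mask, assuming a symmetric window; the global-token half of the pattern is omitted:

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean (seq_len, seq_len) mask where token i may attend to
    token j iff |i - j| <= window, the local half of the Longformer pattern."""
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() <= window
```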
This repository contains the official code for Energy Transformer, an efficient energy-based Transformer variant for graph classification.
Official PyTorch Implementation of Energy Transformer for Masked Image Reconstruction
A custom TensorFlow implementation of Google's ELECTRA NLP model with compositional embeddings using complementary partitions
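Compositional embeddings over complementary partitions are often realized with the quotient-remainder trick (Shi et al., 2020): two small tables indexed by id // m and id % m are combined so every id still gets a distinct vector. A PyTorch sketch for consistency with the other examples here (the repo itself targets TensorFlow; the class name, m, and the multiplicative combiner are illustrative):

```python
import torch
import torch.nn as nn

class QRCompositionalEmbedding(nn.Module):
    """Quotient-remainder compositional embedding: quotient and remainder
    tables are complementary partitions of the id space, so their
    element-wise product is unique per id with far fewer parameters."""
    def __init__(self, vocab_size: int, dim: int, m: int = 1024):
        super().__init__()
        self.m = m
        self.quotient = nn.Embedding((vocab_size + m - 1) // m, dim)
        self.remainder = nn.Embedding(m, dim)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        return self.quotient(ids // self.m) * self.remainder(ids % self.m)
```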
Demo code for CVPR2023 paper "Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers"
Nonparametric Modern Hopfield Models
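For context, the modern (dense) Hopfield retrieval rule is a softmax-weighted readout over stored patterns, xi_new = X^T softmax(beta * X xi) (Ramsauer et al., 2020); the nonparametric models above build on this update. A minimal sketch:

```python
import torch

def hopfield_retrieve(memories: torch.Tensor, queries: torch.Tensor,
                      beta: float = 1.0) -> torch.Tensor:
    """One modern-Hopfield update step.
    memories: (num_patterns, dim) stored patterns X
    queries:  (batch, dim) state patterns xi
    Returns X^T softmax(beta * X xi), the retrieved patterns."""
    weights = torch.softmax(beta * queries @ memories.T, dim=-1)
    return weights @ memories
```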
Gated Attention Unit (TensorFlow implementation)
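The Gated Attention Unit is the layer proposed in "Transformer Quality in Linear Time" (the paper behind the Transformer-variant implementation above). A minimal single-head PyTorch sketch; the default dimensions and the relu² attention normalization are simplified from the paper:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttentionUnit(nn.Module):
    """Single-head GAU sketch: a shared projection yields queries and keys
    via cheap per-dimension scale/offset pairs, and relu^2 attention output
    is gated element-wise before the output projection."""
    def __init__(self, dim: int = 512, expansion: int = 2, qk_dim: int = 128):
        super().__init__()
        hidden = dim * expansion
        self.qk_dim = qk_dim
        self.to_gate = nn.Linear(dim, hidden)    # gating branch U
        self.to_value = nn.Linear(dim, hidden)   # value branch V
        self.to_qk = nn.Linear(dim, qk_dim)      # shared base for Q and K
        self.scale = nn.Parameter(torch.ones(2, qk_dim))
        self.offset = nn.Parameter(torch.zeros(2, qk_dim))
        self.to_out = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        n = x.shape[-2]
        u = F.silu(self.to_gate(x))
        v = F.silu(self.to_value(x))
        z = self.to_qk(x)
        q = z * self.scale[0] + self.offset[0]
        k = z * self.scale[1] + self.offset[1]
        scores = q @ k.transpose(-1, -2) / math.sqrt(self.qk_dim)
        attn = F.relu(scores) ** 2 / n           # relu^2 attention, quadratic form
        return self.to_out(u * (attn @ v))       # gate the attended values
```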
PyTorch implementation of LISA (Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation, WWW 2021)
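LISA belongs to the linear-attention family: a kernel feature map replaces the softmax so key-value statistics can be accumulated once, giving O(n) cost; LISA specifically quantizes keys into codeword histograms. The sketch below shows generic kernelized linear attention (Katharopoulos et al., 2020) for orientation, not LISA's codeword method:

```python
import torch
import torch.nn.functional as F

def linear_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Kernelized linear attention with the elu(x) + 1 feature map.
    q, k: (batch, seq, dim); v: (batch, seq, v_dim)."""
    q = F.elu(q) + 1                                  # positive features phi(q)
    k = F.elu(k) + 1                                  # positive features phi(k)
    kv = torch.einsum('bnd,bne->bde', k, v)           # sum_n phi(k_n) v_n^T, built once
    z = 1.0 / (torch.einsum('bnd,bd->bn', q, k.sum(dim=1)) + 1e-6)  # normalizer
    return torch.einsum('bnd,bde,bn->bne', q, kv, z)  # normalized O(n) output
```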