There are 30 repositories under the transformer-models topic.
A comprehensive paper list on Vision Transformers and attention, including papers, code, and related websites
Fast inference engine for Transformer models
FLOPs counter for convolutional networks in the PyTorch framework
[NeurIPS 2021] "TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up", Yifan Jiang, Shiyu Chang, Zhangyang Wang
solo-learn: a library of self-supervised methods for visual representation learning powered by PyTorch Lightning
A curated list of foundation models for vision and language tasks
Universal Graph Transformer Self-Attention Networks (TheWebConf WWW 2022) (PyTorch and TensorFlow)
💭 Aspect-Based-Sentiment-Analysis: Transformer & Explainable ML (TensorFlow)
[VLDB'22] Anomaly Detection using Transformers, self-conditioning and adversarial training.
🌕 [BMVC 2022] You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction. SOTA for low-light enhancement; runs in 0.004 seconds, so try it for pre-processing.
How to use our public wav2vec2 dimensional emotion model
FlashAttention (Metal Port)
Models to perform neural summarization (extractive and abstractive) using machine learning transformers and a tool to convert abstractive summarization datasets to the extractive task.
LLM notes, covering model inference, Transformer model structure, and LLM framework code analysis.
Efficient Inference of Transformer models
The official code repo of "HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection"
Based on the PyTorch-Transformers library by Hugging Face. Intended as a starting point for applying Transformer models to text classification tasks. Contains code to easily train BERT, XLNet, RoBERTa, and XLM models for text classification.
Punctuation Restoration using Transformer Models for High- and Low-Resource Languages
PyTorch implementation of the Multimodal Fusion Transformer for Remote Sensing Image Classification.
The official code repo for "Zero-shot Audio Source Separation through Query-based Learning from Weakly-labeled Data", in AAAI 2022
Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions.
ShadowFormer (AAAI 2023), PyTorch implementation
Visualizing query-key interactions in language + vision transformers
Official implementation of the CVPR 2024 paper "FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appearance, Head-pose, and Facial Expression Features"
SignNet and BasisNet
My implementation of "Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models"
Stock price prediction using a Temporal Fusion Transformer
This is the official implementation of our paper "Hypergraph Transformer for Skeleton-based Action Recognition."
Lottery number prediction based on Transformer / LSTM models in PyTorch
[CVPR 2024] CFAT: Unleashing Triangular Windows for Image Super-resolution