There are 14 repositories under the attention-model topic.
DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018, and ICCV 2019 Oral
Keras Attention Layer (Luong and Bahdanau scores); both scoring functions are sketched after this list.
Multilingual Automatic Speech Recognition with word-level timestamps and confidence
Use of Attention Gates in a Convolutional Neural Network / Medical Image Classification and Segmentation
Awesome List of Attention Modules and Plug&Play Modules in Computer Vision
Sequence-to-sequence framework with a focus on Neural Machine Translation based on PyTorch
Implementation of the Swin Transformer in PyTorch.
Text classification using deep learning models in PyTorch
A PyTorch library for all things Reinforcement Learning (RL) for Combinatorial Optimization (CO)
AttentionGAN for Unpaired Image-to-Image Translation & Multi-Domain Image-to-Image Translation
Camouflaged Object Detection, CVPR 2020 (Oral)
A Structured Self-attentive Sentence Embedding
A PyTorch reimplementation of the paper "Generative Image Inpainting with Contextual Attention" (https://arxiv.org/abs/1801.07892)
Attention OCR based on TensorFlow
This repository implements an encoder-decoder model with attention for OCR
Notes on the "Attention Is All You Need" video (https://www.youtube.com/watch?v=bCz4OMemCcA)
A neural network that generates captions for an image using a CNN and an RNN with beam search; a minimal beam-search sketch appears after this list.
PyTorch implementation of a batched bi-RNN encoder and attention decoder.
Bidirectional LSTM network for speech emotion recognition.
Code and model release for the NIPS 2017 paper "Attentional Pooling for Action Recognition"
PyTorch implementation of the RT-1-X and RT-2-X models from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models"
Seq2SeqSharp is a tensor-based, fast, and flexible deep neural network framework written in .NET (C#). Its highlights include automatic differentiation, multiple network types (Transformer, LSTM, BiLSTM, and so on), multi-GPU support, cross-platform support (Windows, Linux, x86, x64, ARM), multimodal models for text and images, and more.
Code for "Relation Classification via Multi-Level Attention CNNs"
Attention model for entailment on the SNLI corpus, implemented in TensorFlow and Keras
Code & data accompanying the NAACL 2019 paper "Bidirectional Attentive Memory Networks for Question Answering over Knowledge Bases"
Soft attention mechanism for video caption generation
A recurrent attention module (LARNN) consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer network. The LARNN cell with attention can be used inside a loop on the cell state, just like any other RNN cell; a minimal sketch of this idea appears after this list.
ECG Classification
This repository contains various types of attention mechanisms, such as Bahdanau, soft, additive, and hierarchical attention, in PyTorch, TensorFlow, and Keras
A Keras-based library for analysis of time series data using deep learning algorithms.
Object detection on radar sensor and RGB camera images (https://ieeexplore.ieee.org/document/9191046). Full thesis: "RADAR+RGB Fusion for Robust Object Detection in Autonomous Vehicles", Zenodo, https://doi.org/10.5281/zenodo.13738235
Image captioning based on the Bottom-Up and Top-Down Attention model
PyTorch implementation of Unsupervised Attention-guided Image-to-Image Translation.
[CVPR 2024] CFAT: Unleashing Triangular Windows for Image Super-resolution
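Several entries above (the Keras attention layer and the multi-mechanism collection) revolve around the Luong (multiplicative) and Bahdanau (additive) scoring functions. The following is a minimal sketch of the two scores in plain PyTorch, not code from any repository listed here; tensor shapes, parameter names, and the "general" variant of the Luong score are illustrative assumptions.

```python
# Minimal sketch contrasting Luong (multiplicative) and Bahdanau (additive)
# attention scores. Shapes and parameter names are illustrative assumptions.
import torch
import torch.nn.functional as F

def luong_score(query, keys, W):
    # Luong "general" score: q^T W k for every key.
    # query: (batch, d_q), keys: (batch, seq, d_k), W: (d_q, d_k)
    return torch.einsum('bq,qk,bsk->bs', query, W, keys)

def bahdanau_score(query, keys, W_q, W_k, v):
    # Bahdanau additive score: v^T tanh(W_q q + W_k k) for every key.
    # W_q: (d_a, d_q), W_k: (d_a, d_k), v: (d_a,)
    hidden = torch.tanh((query @ W_q.T).unsqueeze(1) + keys @ W_k.T)
    return hidden @ v                            # (batch, seq)

def attend(scores, values):
    # Softmax over the sequence axis, then a weighted sum of the values.
    weights = F.softmax(scores, dim=-1)          # (batch, seq)
    return torch.einsum('bs,bsd->bd', weights, values)

if __name__ == "__main__":
    B, S, d = 2, 5, 8
    q, K = torch.randn(B, d), torch.randn(B, S, d)
    W = torch.randn(d, d)
    W_q, W_k, v = torch.randn(d, d), torch.randn(d, d), torch.randn(d)
    print(attend(luong_score(q, K, W), K).shape)            # torch.Size([2, 8])
    print(attend(bahdanau_score(q, K, W_q, W_k, v), K).shape)
```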
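The image-captioning entry above mentions beam search over an RNN decoder. Below is a self-contained beam-search sketch; the `step` callable, token IDs, and beam width are assumptions standing in for a real CNN+RNN captioner, not that repository's implementation.

```python
# Minimal beam-search sketch. `step`, the token IDs, and the beam width are
# illustrative assumptions, not taken from the captioning repository above.
import torch

def beam_search(step, start_id, end_id, beam_width=3, max_len=20):
    """step(tokens) -> log-probabilities over the vocabulary for the next token,
    given a 1-D tensor of previously generated token IDs."""
    beams = [([start_id], 0.0)]                      # (token sequence, log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            log_probs = step(torch.tensor(tokens))   # (vocab_size,)
            top_lp, top_id = log_probs.topk(beam_width)
            for lp, tok in zip(top_lp.tolist(), top_id.tolist()):
                candidates.append((tokens + [tok], score + lp))
        # Keep the highest-scoring partial captions; set completed ones aside.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for tokens, score in candidates[:beam_width]:
            (finished if tokens[-1] == end_id else beams).append((tokens, score))
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])[0]

if __name__ == "__main__":
    vocab_size = 10
    torch.manual_seed(0)
    # Toy stand-in for an RNN decoder conditioned on CNN image features.
    def step(tokens):
        return torch.log_softmax(torch.randn(vocab_size), dim=-1)
    print(beam_search(step, start_id=1, end_id=2))
```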
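The LARNN entry describes an LSTM cell that attends over a window of its own past cell states. The sketch below illustrates that idea with a standard nn.LSTMCell and nn.MultiheadAttention; the mixing layer, window length, and head count are assumptions, not the repository's exact BN-LSTM/Transformer-derived formulation.

```python
# Rough sketch of the LARNN idea: an LSTM cell that queries a window of its own
# past cell states with multi-head attention. Layer sizes, the mixing step, and
# the window length are assumptions for illustration only.
import torch
import torch.nn as nn

class WindowedAttentionLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, num_heads=4, window=8):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.mix = nn.Linear(2 * hidden_size, hidden_size)
        self.window = window

    def forward(self, x, state, past_cells):
        # One recurrent step: run the LSTM cell, then let the new cell state
        # attend over the last `window` cell states and mix the result back in.
        h, c = self.cell(x, state)
        if past_cells:
            memory = torch.stack(past_cells[-self.window:], dim=1)   # (B, W, H)
            attended, _ = self.attn(c.unsqueeze(1), memory, memory)  # query = new c
            c = torch.tanh(self.mix(torch.cat([c, attended.squeeze(1)], dim=-1)))
        past_cells.append(c)
        return h, (h, c), past_cells

if __name__ == "__main__":
    B, T, D, H = 2, 12, 16, 32
    cell = WindowedAttentionLSTMCell(D, H)
    x = torch.randn(B, T, D)
    state, past = (torch.zeros(B, H), torch.zeros(B, H)), []
    for t in range(T):                 # used inside a loop, like any other RNN cell
        h, state, past = cell(x[:, t], state, past)
    print(h.shape)                     # torch.Size([2, 32])
```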