There are 19 repositories under the attention topic.
Machine learning, in numpy
Natural Language Processing Tutorial for Deep Learning Researchers
A PyTorch implementation of the Transformer model in "Attention is All You Need".
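For context, the core operation of the Transformer is scaled dot-product attention, softmax(QKᵀ/√d_k)V. A minimal NumPy sketch for illustration only (not this repo's code; all names are made up here):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention from 'Attention is All You Need'.

    Q: (seq_q, d_k), K: (seq_k, d_k), V: (seq_k, d_v).
    Returns attended values (seq_q, d_v) and attention weights (seq_q, seq_k).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # similarity of queries to keys
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of `w` is a probability distribution over the keys, so `out` is a convex combination of the value vectors.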
🍀 PyTorch implementations of various attention mechanisms, MLPs, re-parameterization tricks, and convolution modules, helpful for understanding the corresponding papers in more depth. ⭐⭐⭐
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
This repository mainly collects reading notes on top-conference papers relevant to NLP algorithm engineers.
Draw a leader line in your web page.
My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!
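For reference, GAT scores each edge with a shared attention vector applied to the concatenated projected features of the two endpoints, then softmax-normalizes over each node's neighborhood. A simplified single-head NumPy sketch (illustrative only, not taken from the repo; it assumes the adjacency matrix includes self-loops):

```python
import numpy as np

def gat_attention(h, A, W, a, slope=0.2):
    """Single-head GAT layer (Veličković et al.), simplified.

    h: (N, F) node features, A: (N, N) binary adjacency WITH self-loops,
    W: (F, Fp) projection, a: (2*Fp,) attention vector.
    Returns attended node features and the attention matrix.
    """
    z = h @ W
    N = z.shape[0]
    # e[i, j] = LeakyReLU(a^T [z_i || z_j]) for every node pair
    e = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            e[i, j] = np.concatenate([z[i], z[j]]) @ a
    e = np.where(e > 0, e, slope * e)        # LeakyReLU
    e = np.where(A > 0, e, -np.inf)          # mask non-neighbors
    e -= e.max(axis=1, keepdims=True)        # stable softmax per node
    att = np.exp(e)
    att /= att.sum(axis=1, keepdims=True)
    return att @ z, att

rng = np.random.default_rng(0)
A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)  # toy graph
h = rng.standard_normal((3, 4))
W = rng.standard_normal((4, 4))
a = rng.standard_normal(8)
out, att = gat_attention(h, A, W, a)
```

The self-loops guarantee every row of the masked score matrix has at least one finite entry, so the softmax is well defined.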
A PyTorch repository of YOLOv4, attentive YOLOv4, and MobileNet YOLOv4, with PASCAL VOC and COCO support.
Gathers machine learning and Tensorflow deep learning models for NLP problems, 1.13 < Tensorflow < 2.0
Aspect-Based Sentiment Analysis, PyTorch implementations.
Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute
Documents, papers, and code related to Natural Language Processing, including Topic Models, Word Embeddings, Named Entity Recognition, Text Classification, Text Generation, Text Similarity, Machine Translation, etc. All code is implemented in TensorFlow 2.0.
Improving Convolutional Networks via Attention Transfer (ICLR 2017)
Scenic: A Jax Library for Computer Vision Research and Beyond
Generative Adversarial Transformers
:punch: Reproductions of simple CV projects, including attention modules, classification, object detection, segmentation, keypoint detection, tracking :smile: etc.
A Tensorflow implementation of Spatial Transformer Networks
Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
A list of efficient attention modules.
An implementation of Performer, a linear attention-based transformer, in Pytorch
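Performer avoids the O(n²) softmax matrix by approximating attention with positive random features (FAVOR+), so attention becomes φ(Q)(φ(K)ᵀV) and costs O(n). The sketch below uses a simpler deterministic feature map, elu(x)+1, purely to illustrate how kernelization reorders the computation; it is not the FAVOR+ estimator itself:

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized (linear) attention: phi(Q) (phi(K)^T V) / normalizer.

    Illustrative feature map elu(x)+1 (always positive); Performer
    instead uses random positive features (FAVOR+).
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                    # (d, d_v): built once, never an (n, n) matrix
    Z = Qp @ Kp.sum(axis=0)          # per-query normalizer
    return (Qp @ KV) / (Z[:, None] + eps)

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 8))
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 3))
out = linear_attention(Q, K, V)
```

Because φ(Q)φ(K)ᵀ is never materialized, memory and time scale linearly in sequence length instead of quadratically.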
Tensorflow implementation of attention mechanism for text classification tasks.
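A common attention mechanism for text classification is additive attention pooling over token representations, in the style of hierarchical attention networks: score each token against a learned context vector, softmax the scores, and take the weighted sum as the sentence embedding. A hypothetical NumPy sketch with randomly initialized parameters (not this repo's code):

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 6, 16                              # tokens, hidden size
H = rng.standard_normal((T, d))           # token representations (e.g., RNN outputs)
W = rng.standard_normal((d, d))           # attention projection (learned in practice)
b = np.zeros(d)
u = rng.standard_normal(d)                # learned context vector

scores = np.tanh(H @ W + b) @ u           # additive attention score per token
scores -= scores.max()                    # stable softmax
alpha = np.exp(scores)
alpha /= alpha.sum()                      # attention weights over tokens
sentence = alpha @ H                      # weighted sum -> fixed-size sentence vector
```

The resulting `sentence` vector is then fed to a classifier head; the weights `alpha` also give per-token interpretability.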
Implementation of papers for text classification task on DBpedia
Voice activity detection (VAD) toolkit including DNN, bDNN, LSTM and ACAM based VAD. We also provide our directly recorded dataset.
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Currently includes IWSLT pretrained models.
A PyTorch implementation of Speech Transformer, an End-to-End ASR with Transformer network on Mandarin Chinese.
Simple implementations of NLP models. Tutorials are written in Chinese on my website https://mofanpy.com
Code for our CVPR 2021 paper on coordinate attention.
A bidirectional recurrent neural network model with attention mechanism for restoring missing punctuation in unsegmented text
End-to-end ASR/LM implementation with PyTorch
Official Pytorch Code for "Medical Transformer: Gated Axial-Attention for Medical Image Segmentation" - MICCAI 2021
Residual Attention Network for Image Classification
Graph Transformer Architecture. Source code for "A Generalization of Transformer Networks to Graphs", DLG-AAAI'21.
Bilinear attention networks for visual question answering
A Structured Self-attentive Sentence Embedding