There are 42 repositories under the attention-mechanism topic.
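For context, the operation shared by most repositories in this list is scaled dot-product attention, as introduced in "Attention Is All You Need". A minimal NumPy sketch (illustrative only, not taken from any listed repo):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Computes softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)   # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the keys
    return weights @ V                             # weighted sum of values

# Toy example: 2 queries, 3 key/value pairs of dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

Each output row is a convex combination of the value rows, weighted by query-key similarity; the transformer, Performer, Reformer, and vision-attention repos below all build on variants of this kernel.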
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
All kinds of text classification models, and more, built with deep learning
Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
A collection of important graph embedding, classification and representation learning papers with implementations.
A comprehensive paper list on Vision Transformers/Attention, including papers, code, and related websites
A simple but complete full-attention transformer with a set of promising experimental features from various papers
A TensorFlow Implementation of the Transformer: Attention Is All You Need
Automatic Speech Recognition (ASR), Speaker Verification, Speech Synthesis, Text-to-Speech (TTS), Language Modelling, Singing Voice Synthesis (SVS), Voice Conversion (VC)
Keras Attention Layer (Luong and Bahdanau scores).
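The Luong and Bahdanau scores mentioned above are the two classic alignment functions for seq2seq attention: Luong's multiplicative (dot-product) score versus Bahdanau's additive score through a small feed-forward layer. A hedged NumPy sketch of both (variable names here are illustrative, not the Keras layer's API):

```python
import numpy as np

def luong_dot_score(query, keys):
    # Luong "dot" score: query . key for each key vector
    return keys @ query                              # (n_keys,)

def bahdanau_score(query, keys, W_q, W_k, v):
    # Bahdanau additive score: v^T tanh(W_q q + W_k k)
    return np.tanh(query @ W_q + keys @ W_k) @ v     # (n_keys,)

def attend(scores, values):
    w = np.exp(scores - scores.max())
    w /= w.sum()                                     # softmax attention weights
    return w @ values                                # context vector

rng = np.random.default_rng(1)
d, n = 8, 5
query = rng.normal(size=d)
keys = rng.normal(size=(n, d))

ctx_luong = attend(luong_dot_score(query, keys), keys)

W_q, W_k = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=d)
ctx_bahdanau = attend(bahdanau_score(query, keys, W_q, W_k, v), keys)
print(ctx_luong.shape, ctx_bahdanau.shape)  # (8,) (8,)
```

The practical difference: the dot score is parameter-free but requires query and key dimensions to match, while the additive score learns projections and historically behaved better for longer source sequences.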
Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!
Reformer, the efficient Transformer, in Pytorch
Multilingual Automatic Speech Recognition with word-level timestamps and confidence
Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute
To eventually become an unofficial Pytorch implementation / replication of Alphafold2, as details of the architecture get released
A chatbot for the finance and legal domains (with some open-domain chit-chat). Its main modules include information extraction, NLU, NLG, and a knowledge graph; the frontend is integrated via Django, and RESTful interfaces for the NLP and KG components are already packaged.
Sequence-to-sequence framework with a focus on Neural Machine Translation based on PyTorch
Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch
Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
Implementation of SoundStorm, Efficient Parallel Audio Generation from Google Deepmind, in Pytorch
A text classifier based on Hierarchical Attention Networks for Document Classification
An implementation of Performer, a linear attention-based transformer, in Pytorch
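The Performer's key idea is replacing the softmax with a kernel feature map so attention cost scales linearly in sequence length. A simplified sketch using a plain positive feature map (a stand-in for Performer's FAVOR+ random features, which are omitted here for brevity):

```python
import numpy as np

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernelized attention: softmax(QK^T)V is approximated by
    # phi(Q) [phi(K)^T V] / normalizer. Computing phi(K)^T V first
    # costs O(n * d^2) instead of the O(n^2 * d) of full attention.
    Qf, Kf = phi(Q), phi(K)          # feature-mapped queries and keys
    KV = Kf.T @ V                    # (d_feat, d_v) key/value summary
    Z = Qf @ Kf.sum(axis=0)          # per-query normalizer, (n_q,)
    return (Qf @ KV) / Z[:, None]

rng = np.random.default_rng(2)
Q = rng.normal(size=(6, 4))
K = rng.normal(size=(6, 4))
V = rng.normal(size=(6, 4))
out = linear_attention(Q, K, V)
print(out.shape)  # (6, 4)
```

Because the feature map is strictly positive, the implicit attention weights are positive and sum to one per query, so each output row remains a convex combination of value rows, just as in softmax attention.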
Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch
A curated list of NLP resources focused on Transformer networks, attention mechanism, GPT, BERT, ChatGPT, LLMs, and transfer learning.
Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Pretrained IWSLT models are currently included.
TensorFlow Implementation of "Show, Attend and Tell"
PyTorch implementation of "Get To The Point: Summarization with Pointer-Generator Networks"
A Deep Learning library for EEG Tasks (Signals) Classification, based on TensorFlow.
Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch
Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways
Visualizing RNNs using the attention mechanism
Based on the yolo-high-level project (detect/pose/classify/segment): includes the YOLOv5/YOLOv7/YOLOv8 cores, improvement research, Swin Transformer V2, and the Attention series, plus training tips, business customization, and engineering deployment (C)