There are 16 repositories under the attention-visualization topic.
[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks.
Multilingual Automatic Speech Recognition with word-level timestamps and confidence
Korean named entity recognizer built with KoBERT and CRF (a BERT+CRF-based Named Entity Recognition model for Korean)
Plots vector graphics for attention-based text visualisation
Neat (Neural Attention) Vision is a visualization tool for the attention mechanisms of deep-learning models for Natural Language Processing (NLP) tasks. (framework-agnostic)
Comparatively fine-tuning pretrained BERT models on downstream, text classification tasks with different architectural configurations in PyTorch.
Visualizing query-key interactions in language + vision transformers
Visualization for simple attention and Google's multi-head attention.
Summary of Transformer applications for computer vision tasks.
My code for learning attention mechanisms
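Several of the entries above render plain (scaled dot-product) attention weights as heatmaps. As a minimal, framework-free sketch of what those tools visualize (NumPy only; not taken from any of the listed repositories), the heatmap is just the softmax of the query-key score matrix:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return the attention output and the weight matrix used for visualization."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_queries, n_keys) raw scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 queries, dimension 4
K = rng.normal(size=(5, 4))   # 5 keys
V = rng.normal(size=(5, 4))
out, w = scaled_dot_product_attention(Q, K, V)
# each row of w is a probability distribution over keys,
# which is exactly what attention heatmaps display
```

Each row of `w` sums to 1, so it can be plotted directly as one row of a token-by-token heatmap.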
🚀 Cross attention map tools for huggingface/diffusers
(ECCV2020) Tensorflow implementation of A Generic Visualization Approach for Convolutional Neural Networks
Lightweight visualization tool for neural attention mechanisms
Implemented the image caption generation method proposed in the Show, Attend, and Tell paper using the Fastai framework to describe the content of images. Achieved a BLEU score of 24 with a beam search size of 5. Designed a web application for model deployment using the Flask framework.
Attention mechanism layers for Keras, usable like Dense and RNN layers
Easy-to-read implementation of self-supervised learning using vision transformer and knowledge distillation with no labels - DINO :smiley:
PyTorch implementation of the End-to-End Memory Network with attention-layer visualisation support.
A PyTorch implementation of the paper 'Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering'
Transfer learning with pretrained vision transformers for breast histopathology
Encoder-Decoder CNN-LSTM Model with an attention mechanism for image captioning. Trained using the Microsoft COCO Dataset.
Multimodal Bi-Transformers (MMBT) in Biomedical Text/Image Classification
Attention weight visualisation for sentiment analysis.
This project presents the Attention-enhanced Multi-channel Recurrent Convolutional Network (AMRCN) for explainable fake news detection.
This study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model, under targeted attacks such as the PGD adversarial attack.
Implementation of GRU-based Encoder-Decoder Architecture with Bahdanau Attention Mechanism for Machine Translation from German to English.
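The Bahdanau mechanism used in the entry above can be sketched as follows (a minimal NumPy illustration, not the repository's code; the matrix names `W_a`, `U_a`, `v_a` follow the usual paper notation): the additive score is v_a^T tanh(W_a s + U_a h_j), softmaxed over source positions to weight the encoder states.

```python
import numpy as np

def bahdanau_attention(decoder_state, encoder_states, W_a, U_a, v_a):
    """Additive (Bahdanau) attention: score(s, h_j) = v_a^T tanh(W_a s + U_a h_j)."""
    # decoder_state: (d_dec,), encoder_states: (T, d_enc)
    e = np.tanh(decoder_state @ W_a + encoder_states @ U_a) @ v_a  # (T,) scores
    e -= e.max()                              # stability shift before softmax
    a = np.exp(e)
    a /= a.sum()                              # attention weights over source tokens
    context = a @ encoder_states              # weighted sum of encoder states
    return a, context

rng = np.random.default_rng(1)
s = rng.normal(size=4)                        # decoder hidden state
H = rng.normal(size=(6, 4))                   # 6 encoder states
W_a = rng.normal(size=(4, 8))
U_a = rng.normal(size=(4, 8))
v_a = rng.normal(size=8)
a, ctx = bahdanau_attention(s, H, W_a, U_a, v_a)
```

The weight vector `a` is what alignment visualizations plot for each decoding step.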
Extract explainability from RoBERTa 🪆 and Born 🐈 while classifying depression 🎭
Shopify-Pipedrive Integration: a JavaScript program that integrates Shopify and Pipedrive, automating the creation of deals in Pipedrive based on Shopify orders. The program fetches data from Shopify and Pipedrive, creates or updates records, and establishes connections between them.
Implementation of the Vision Transformer (ViT) for image classification using PyTorch
Generates a LaTeX script for word-weighted heatmaps, given a list of tokens and attention scores
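The last entry describes a concrete how-to that is easy to sketch. A minimal version (an illustrative assumption, not the repository's code; the function name and the red shading via the LaTeX `xcolor` package's `\colorbox` are my choices) maps each score to a color intensity:

```python
def attention_to_latex(tokens, scores):
    """Emit a LaTeX snippet coloring each token by its attention score.

    Assumes the target document loads the xcolor package, so that
    \\colorbox{red!N}{...} shades the token N% red.
    """
    m = max(scores) or 1.0                    # normalize by the largest score
    parts = []
    for tok, s in zip(tokens, scores):
        pct = int(round(100 * s / m))         # map score to a 0-100 shade
        parts.append(r"\colorbox{red!%d}{\strut %s}" % (pct, tok))
    return " ".join(parts)

print(attention_to_latex(["the", "cat", "sat"], [0.1, 0.7, 0.2]))
```

Pasting the returned string into a LaTeX document renders the sentence with darker red on higher-attention tokens; `\strut` keeps the colored boxes a uniform height.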