There are 32 repositories under the visual-attention topic.
Salient Object Detection in the Deep Learning Era: An In-Depth Survey
Learning Unsupervised Video Object Segmentation through Visual Attention (CVPR19, PAMI20)
Revisiting Video Saliency: A Large-scale Benchmark and a New Model (CVPR18, PAMI19)
[TPAMI] Automatic Gaze Analysis ‘in-the-wild’: A Survey
A context-aware, visual-attention-based training pipeline for object detection from webpage screenshots
AttentionMask: Attentive, Efficient Object Proposal Generation Focusing on Small Objects (ACCV 2018, accepted as oral)
Salient Object Detection Driven by Fixation Prediction (CVPR2018)
Deep learning model for supervised video summarization called Multi Source Visual Attention (MSVA)
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture able to provide context-based eye fixation prediction for several autonomous driving scenarios, while offering interpretability both globally and locally.
Chainer implementation of DeepMind's Visual Attention Model paper
This repository contains homework assignments and projects completed for the course "Advanced Topics in Neuroscience" instructed by Dr. Ali Ghazizadeh at Sharif University of Technology.
Code for the paper 'A Biologically Inspired Visual Working Memory for Deep Networks'
COMIC: This is the code repo of our TMM2019 work titled "COMIC: Towards a Compact Image Captioning Model with Attention".
Implementation of a Multimodal Neural Network for Image Captioning in Tensorflow.
Image caption models using visual attention and reinforcement learning (The 4th place solution to the AIChallenger Contest, Image Caption Track by team xiaoquexing)
STNet: Selective Tuning of Convolutional Networks for Object Localization
Visual Attentive GAN Project
AttentionBox: Efficient Object Proposal Generation based on AttentionMask
Official Code for 'Exploring Language Prior for Mode-Sensitive Visual Attention Modeling' (ACM MM 2020)
Where do people look in images, on average? At rare, and thus surprising, things. Let's compute them automatically
Official Implementation for NeurIPS 2023 Paper "What Do Deep Saliency Models Learn about Visual Attention"
Deep Neural Network Image Captioner using visual Attention
Visual attention: what is salient in an image, with DeepRare2019
Code for "Multiple decisions about one object involve parallel sensory acquisition but time-multiplexed evidence incorporation"
Implementation of the 2016 paper "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention" on the Flickr30k dataset.
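Several of the captioning repositories above build on the soft (additive) attention mechanism of "Show, Attend and Tell". A minimal NumPy sketch of one attention step is below; the weight names and shapes (`W_f`, `W_h`, `w_a`) are illustrative assumptions, not taken from any of the listed codebases.

```python
import numpy as np

def soft_attention(features, hidden, W_f, W_h, w_a):
    """One step of additive soft attention over spatial image features.
    features: (L, D) annotation vectors from a CNN feature map
    hidden:   (H,)  decoder hidden state
    W_f (D, A), W_h (H, A), w_a (A,): hypothetical projection weights
    Returns the context vector (D,) and attention weights (L,)."""
    scores = np.tanh(features @ W_f + hidden @ W_h) @ w_a  # (L,) alignment scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                   # softmax over L locations
    context = alpha @ features                             # expected feature, (D,)
    return context, alpha

# toy example: 4 spatial locations, 8-dim features, 6-dim hidden state
rng = np.random.default_rng(0)
L, D, H, A = 4, 8, 6, 5
ctx, alpha = soft_attention(rng.normal(size=(L, D)), rng.normal(size=H),
                            rng.normal(size=(D, A)), rng.normal(size=(H, A)),
                            rng.normal(size=A))
# alpha sums to 1; ctx is a convex combination of the feature vectors
```

At each decoding step the context vector is fed to the caption decoder, so the model attends to different image regions as it emits each word.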
The ETTO (Eye-Tracking Through Objects) and EToCVD (Eye-Tracking of Colour Vision Deficiencies) datasets, shared for anyone interested in working on visual attention/visual saliency.
RARE2007 is a feature-engineered bottom-up saliency model using only color information (no orientation)
RARE2012 is a feature-engineered bottom-up visual attention model
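The RARE models (and the "rare, thus surprising" idea above) score a location as salient in proportion to how rare its features are. A toy sketch of that principle, not the authors' code: saliency as the self-information -log p of a pixel's quantized intensity.

```python
import numpy as np

def rarity_saliency(gray, bins=16):
    """Toy rarity-based saliency: quantize intensities, estimate their
    empirical distribution, and score each pixel by -log p (self-information).
    gray: 2-D array of intensities in [0, 1]."""
    q = np.clip((gray * bins).astype(int), 0, bins - 1)   # quantize to `bins` levels
    p = np.bincount(q.ravel(), minlength=bins) / q.size   # empirical probabilities
    return -np.log(p[q] + 1e-12)                          # rare values -> high saliency

img = np.zeros((8, 8))
img[3, 3] = 1.0                  # one rare bright pixel in a dark image
sal = rarity_saliency(img)
# the lone bright pixel gets the highest saliency score
```

The full RARE models operate on multi-scale color (and, in RARE2012, orientation) features rather than raw intensity, but the rarity scoring principle is the same.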
Tools for the IEEE Journal on Emerging and Selected Topics in Circuits and Systems paper "Visual Attention-Aware Omnidirectional Video Streaming Using Optimal Tiles for Virtual Reality"
We present SCENE-pathy, a dataset and a set of baselines to study the visual selective attention (VSA) of people towards the 3D scene in which they are located
A model of mixed neural networks for step-by-step processing of dynamic visual scenes, activity recognition, and behavioral prediction
The analysis pipeline for our paper 'Functional connectivity fingerprints of the frontal eye field and inferior frontal junction suggest spatial versus nonspatial processing in the prefrontal cortex'.