There are 31 repositories under the visual-question-answering topic.
PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome
Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch
X-modaler is a versatile and high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning, and cross-modal retrieval).
A collection of resources on applications of multi-modal learning in medical imaging.
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI"
Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey
PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning"
A lightweight, scalable, and general framework for visual question answering research
MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts
Implementation of CVPR 2023 paper "Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering".
Strong baseline for visual question answering
A collection of computer vision projects and tools.
[AAAI 2024] NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario.
[NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?"
PyTorch implementation of the winning entry from the VQA Challenge Workshop at CVPR'17
TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering
[NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models
Document Visual Question Answering
[ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
Research Code for NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER adversarial training part
A PyTorch implementation of "A simple neural network module for relational reasoning", evaluated on the CLEVR dataset
CNN+LSTM, Attention based, and MUTAN-based models for Visual Question Answering
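The CNN+LSTM baseline named above can be sketched as follows. This is a minimal illustration, not code from the listed repository: image features from a pretrained CNN and the final LSTM state of the question are fused elementwise, then classified over a fixed answer vocabulary (all dimensions here are assumptions chosen for the example).

```python
import torch
import torch.nn as nn

class CnnLstmVqa(nn.Module):
    """Minimal CNN+LSTM VQA sketch: fuse image features with a question encoding."""

    def __init__(self, vocab_size=1000, embed_dim=300, hidden_dim=512,
                 img_feat_dim=2048, num_answers=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Project precomputed CNN image features into the LSTM's hidden space.
        self.img_proj = nn.Linear(img_feat_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_answers)

    def forward(self, img_feats, question_ids):
        # question_ids: (B, T) token ids; img_feats: (B, img_feat_dim)
        _, (h, _) = self.lstm(self.embed(question_ids))
        # Elementwise (Hadamard) fusion of image and question representations.
        fused = torch.tanh(self.img_proj(img_feats)) * h[-1]
        return self.classifier(fused)  # (B, num_answers) answer logits

model = CnnLstmVqa()
logits = model(torch.randn(2, 2048), torch.randint(0, 1000, (2, 8)))
print(logits.shape)  # torch.Size([2, 1000])
```

Attention-based and MUTAN-based variants replace the elementwise fusion with attention over spatial image features or a multimodal tensor (Tucker) decomposition, respectively.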
Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023)
[Paper][ISWC 2021] Zero-shot Visual Question Answering using Knowledge Graph
Bottom-up features extractor implemented in PyTorch.
[ICCV 2021] Official implementation of the paper "TRAR: Routing the Attention Spans in Transformers for Visual Question Answering"
PyTorch VQA implementation that achieved top performance in the (ECCV18) VizWiz Grand Challenge: Answering Visual Questions from Blind People
AIOZ AI - Overcoming Data Limitation in Medical Visual Question Answering (MICCAI 2019)
[ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning"