
Why are Visually-Grounded Language Models Bad at Image Classification?

MIT License | Python | PyTorch | Black

This repo provides the PyTorch source code of our paper "Why are Visually-Grounded Language Models Bad at Image Classification?". Check out the project page at https://yuhui-zh15.github.io/VLMClassifier-Website/!

🔮 Abstract

Image classification is one of the most fundamental capabilities of machine vision intelligence. In this work, we revisit the image classification task using visually-grounded language models (VLMs) such as GPT-4V and LLaVA. We find that existing proprietary and public VLMs, despite often using CLIP as a vision encoder and having many more parameters, significantly underperform CLIP on standard image classification benchmarks like ImageNet. To understand the reason, we explore several hypotheses concerning the inference algorithms, training objectives, and data processing in VLMs. Our analysis reveals that the primary cause is data-related: critical information for image classification is encoded in the VLM's latent space but can only be effectively decoded with enough training data. Specifically, there is a strong correlation between the frequency of class exposure during VLM training and instruction-tuning and the VLM's performance on those classes; when trained with sufficient data, VLMs can match the accuracy of state-of-the-art classification models. Based on these findings, we enhance a VLM by integrating classification-focused datasets into its training, and demonstrate that the enhanced classification performance of the VLM transfers to its general capabilities, resulting in an improvement of 11.8% on the newly collected ImageWikiQA dataset.

🚀 Getting Started

Please install the required packages:
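
A minimal installation sketch is shown below, assuming a standard conda + pip workflow; the environment name, Python version, and requirements.txt file are illustrative assumptions, not confirmed details of this repo.

# Sketch only: environment name, Python version, and requirements file are assumptions.
git clone https://github.com/yuhui-zh15/VLMClassifier.git
cd VLMClassifier
conda create -n vlmclassifier python=3.10 -y
conda activate vlmclassifier
pip install -r requirements.txt   # assumed dependency file at the repo root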

📄 Reproduce Paper Results

Please look at the script files in each folder to reproduce the results in the paper; an illustrative example is shown below.
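
As an illustration of the intended workflow (the folder and script names below are hypothetical; consult the actual folders for the exact commands):

# Hypothetical example: replace with the real folder and script names from this repo.
cd llava                    # e.g., a folder containing the LLaVA experiments
bash run_imagenet.sh        # e.g., a script that reproduces the ImageNet results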

💎 Dataset: ImageWikiQA

The ImageWikiQA dataset is available here. The corresponding images can be downloaded here.

🎯 Citation

If you use this repo in your research, please cite it as follows:

@article{VLMClassifier,
  title={Why are Visually-Grounded Language Models Bad at Image Classification?},
  author={Zhang, Yuhui and Unell, Alyssa and Wang, Xiaohan and Ghosh, Dhruba and Su, Yuchang and Schmidt, Ludwig and Yeung-Levy, Serena},
  journal={arXiv preprint arXiv:2405.18415},
  year={2024}
}
