sabirdvd / BLIP_image_caption_demo

BLIP image caption demo - Medium blog post

Home Page: http://bit.ly/3KLX9c0


BLIP image caption extended demo

Please refer to this Medium blog post for more details.

The original paper and Colab:

arXiv | Open In Colab

For image captioning with the large model only, using the two proposed caption generation methods (beam search and nucleus sampling), and running on your local machine over multiple images:

Open In Colab

```
conda create -n BLIP_demo python=3.7 anaconda
conda activate BLIP_demo
git clone https://github.com/salesforce/BLIP
cd BLIP
pip install -r requirements.txt
```

```
git clone https://github.com/sabirdvd/BLIP_image_caption_demo.git
# copy the inference script into the BLIP repo so its imports resolve
cp BLIP_image_caption_demo/caption_Inference_L.py .
python caption_Inference_L.py
```
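
The script loads the ViT-L captioning checkpoint and decodes each image with both strategies. Below is a minimal sketch of what such an inference script can look like, assuming the BLIP repo's `blip_decoder` API and the public `model_large_caption.pth` checkpoint; the sample image URL is illustrative.

```python
import torch
import requests
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode
from models.blip import blip_decoder  # resolves when run from inside the BLIP repo

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
image_size = 384

# preprocessing used by the official BLIP demo
transform = transforms.Compose([
    transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])

# illustrative image URL; loop over your own files for the multi-image case
img_url = 'https://example.com/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
image = transform(raw_image).unsqueeze(0).to(device)

# ViT-L captioning checkpoint released with the BLIP repo
model_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth'
model = blip_decoder(pretrained=model_url, image_size=image_size, vit='large')
model.eval()
model = model.to(device)

with torch.no_grad():
    # beam search: deterministic, favors safe, high-precision captions
    beam = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)
    # nucleus sampling: stochastic, yields more diverse captions
    nucleus = model.generate(image, sample=True, top_p=0.9, max_length=20, min_length=5)

print('beam search:', beam[0])
print('nucleus sampling:', nucleus[0])
```

Beam search trades diversity for precision, while nucleus sampling (top_p=0.9) produces more varied captions; both decoding methods are proposed in the paper.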

For BLIP-2:

```
pip install salesforce-lavis
python BILP-2_caption_Inference_2.7B.py
```

Note that BLIP-2 cannot run on Colab; it only runs on a large GPU such as an A100. Please find the output in BLIP_2_2.7b.json.
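
The BLIP-2 script presumably goes through the LAVIS API; a minimal sketch, assuming the `blip2_opt` model with the pretrained OPT-2.7B weights (the image path is illustrative):

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# BLIP-2 with the OPT-2.7B language model; the weights need a large GPU (e.g. an A100)
model, vis_processors, _ = load_model_and_preprocess(
    name='blip2_opt', model_type='pretrain_opt2.7b', is_eval=True, device=device)

raw_image = Image.open('demo.jpg').convert('RGB')  # illustrative path
image = vis_processors['eval'](raw_image).unsqueeze(0).to(device)

# generate a caption (beam search by default)
print(model.generate({'image': image})[0])
```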

For the COCO Caption Karpathy test split (the standard COCO image captioning benchmark); this is my run using the ViT-L checkpoint (L_check_point):

Download the COCO-caption evaluation metrics from here.

```
python coco_eval_run.py
```

| Model | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | CIDEr | SPICE |
|---|---|---|---|---|---|---|---|
| BLIP_ViT-L, nucleus sampling (this run) | 0.660 | 0.456 | 0.308 | 0.205 | 0.239 | 0.869 | 0.190 |
| BLIP_ViT-L, paper result (beam search) | 0.797 | 0.649 | 0.514 | 0.403 | 0.311 | 1.365 | 0.243 |
| BLIP-2_ViT-g-OPT2.7B, paper result (beam search) | 0.831 | 0.691 | 0.555 | 0.438 | 0.317 | 1.460 | 0.252 |
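
A sketch of how such an evaluation script typically scores generated captions against the COCO annotations with the standard pycocoevalcap toolkit; the file paths here are illustrative:

```python
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

# ground-truth annotations plus generated captions in COCO result format:
# [{"image_id": 42, "caption": "a dog runs on the beach"}, ...]
coco = COCO('annotations/captions_karpathy_test.json')  # illustrative path
coco_result = coco.loadRes('BLIP_2_2.7b.json')

coco_eval = COCOEvalCap(coco, coco_result)
coco_eval.evaluate()

# prints BLEU-1..4, METEOR, ROUGE-L, CIDEr and SPICE
for metric, score in coco_eval.eval.items():
    print(f'{metric}: {score:.3f}')
```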

Please refer to the original work for more information:

https://github.com/salesforce/BLIP
