InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks —— An Open-Source Alternative to ViT-22B
2024/01/27: We release the 448-resolution model, achieving 76.6 on MMBench dev; see here.
2024/01/24: InternVL-Chat-V1.1 is released. It supports Chinese and has stronger OCR capability; see here or try our demo.
2024/01/16: We release our customized mmcv/mmsegmentation/mmdetection code, integrated with DeepSpeed, which can be used for training large-scale object detection and semantic segmentation models.
[Paper] [Chat Demo] [Quick Start] [中文解读 (Chinese write-up)]
InternVL scales up the ViT to 6B parameters and aligns it with an LLM.
It is the largest open-source vision/vision-language foundation model (14B) to date, achieving state-of-the-art results on 32 benchmarks spanning visual perception, cross-modal retrieval, multimodal dialogue, and more.
Model | Date | Download | Note |
---|---|---|---|
InternViT-6B-224px | 2023.12.22 | 🤗 HF link | vision foundation model |
InternVL-14B-224px | 2023.12.22 | 🤗 HF link | vision-language foundation model |
InternVL-Chat-13B | 2023.12.25 | 🤗 HF link | English multimodal dialogue |
InternVL-Chat-19B | 2023.12.25 | 🤗 HF link | English multimodal dialogue |
InternVL-Chat-V1.1 | 2024.01.24 | 🤗 HF link | support Chinese and stronger OCR |
InternViT-6B-448px | 2024.01.30 | 🤗 HF link | 448 resolution |
Visual Perception

- Linear-Probe Image Classification [see details]

  ViT-22B uses the private JFT-3B dataset.

  method | #param | IN-1K | IN-ReaL | IN-V2 | IN-A | IN-R | IN-Sketch |
  ---|---|---|---|---|---|---|---|
  OpenCLIP-G | 1.8B | 86.2 | 89.4 | 77.2 | 63.8 | 87.8 | 66.4 |
  DINOv2-g | 1.1B | 86.5 | 89.6 | 78.4 | 75.9 | 78.8 | 62.5 |
  EVA-01-CLIP-g | 1.1B | 86.5 | 89.3 | 77.4 | 70.5 | 87.7 | 63.1 |
  MAWS-ViT-6.5B | 6.5B | 87.8 | - | - | - | - | - |
  ViT-22B* | 21.7B | 89.5 | 90.9 | 83.2 | 83.8 | 87.4 | - |
  InternViT-6B (ours) | 5.9B | 88.2 | 90.4 | 79.9 | 77.5 | 89.8 | 69.1 |
- Semantic Segmentation [see details]

  method | decoder | #param (train/total) | crop size | mIoU |
  ---|---|---|---|---|
  OpenCLIP-G (frozen) | Linear | 0.3M / 1.8B | 512 | 39.3 |
  ViT-22B (frozen) | Linear | 0.9M / 21.7B | 504 | 34.6 |
  InternViT-6B (frozen) | Linear | 0.5M / 5.9B | 504 | 47.2 (+12.6) |
  ViT-22B (frozen) | UperNet | 0.8B / 22.5B | 504 | 52.7 |
  InternViT-6B (frozen) | UperNet | 0.4B / 6.3B | 504 | 54.9 (+2.2) |
  ViT-22B | UperNet | 22.5B / 22.5B | 504 | 55.3 |
  InternViT-6B | UperNet | 6.3B / 6.3B | 504 | 58.9 (+3.6) |
- Zero-Shot Image Classification [see details] (a minimal scoring sketch follows this list)

  method | IN-1K | IN-A | IN-R | IN-V2 | IN-Sketch | ObjectNet |
  ---|---|---|---|---|---|---|
  OpenCLIP-G | 80.1 | 69.3 | 92.1 | 73.6 | 68.9 | 73.0 |
  EVA-02-CLIP-E+ | 82.0 | 82.1 | 94.5 | 75.7 | 71.6 | 79.6 |
  ViT-22B* | 85.9 | 90.1 | 96.0 | 80.9 | - | 87.6 |
  InternVL-C (ours) | 83.2 | 83.8 | 95.5 | 77.3 | 73.9 | 80.6 |
- Multilingual Zero-Shot Image Classification [see details]

  EN: English, ZH: Chinese, JP: Japanese, AR: Arabic, IT: Italian

  method | IN-1K (EN) | IN-1K (ZH) | IN-1K (JP) | IN-1K (AR) | IN-1K (IT) |
  ---|---|---|---|---|---|
  Taiyi-CLIP-ViT-H | - | 54.4 | - | - | - |
  WuKong-ViT-L-G | - | 57.5 | - | - | - |
  CN-CLIP-ViT-H | - | 59.6 | - | - | - |
  AltCLIP-ViT-L | 74.5 | 59.6 | - | - | - |
  EVA-02-CLIP-E+ | 82.0 | - | - | - | 41.2 |
  OpenCLIP-XLM-R-H | 77.0 | 55.7 | 53.1 | 37.0 | 56.8 |
  InternVL-C (ours) | 83.2 | 64.5 | 61.5 | 44.9 | 65.7 |
- Zero-Shot Video Classification [see details]

  method | #frame | K400 | K600 | K700 |
  ---|---|---|---|---|
  OpenCLIP-G | 1 | 65.9 | 66.1 | 59.2 |
  EVA-02-CLIP-E+ | 1 | 69.8 | 69.3 | 63.4 |
  InternVL-C (ours) | 1 | 71.0 | 71.3 | 65.7 |
  ViCLIP | 8 | 75.7 | 73.5 | 66.4 |
  InternVL-C (ours) | 8 | 79.4 | 78.8 | 71.5 |
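As a rough illustration of how the zero-shot classification numbers above are produced, the sketch below scores one image against a few class prompts with the InternVL-C interface shown in the Quick Start section. It is a minimal sketch, not the exact evaluation pipeline: the `summarize:` prefix follows the Quick Start example, while the class names and prompt template are placeholders (the reported results use the full ImageNet-1K label set).

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, CLIPImageProcessor

path = 'OpenGVLab/InternVL-14B-224px'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()
image_processor = CLIPImageProcessor.from_pretrained(path)
tokenizer = AutoTokenizer.from_pretrained(path, use_fast=False, add_eos_token=True)
tokenizer.pad_token_id = 0

# hypothetical label subset; the reported numbers use the full ImageNet-1K class list
class_names = ['red panda', 'giant panda', 'cat', 'dog']
prompts = ['summarize:a photo of a ' + name for name in class_names]

image = Image.open('./examples/image1.jpg').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
input_ids = tokenizer(prompts, return_tensors='pt', max_length=80,
                      truncation=True, padding='max_length').input_ids.cuda()

# logits_per_image: [num_images, num_prompts] similarity scores
logits_per_image, logits_per_text = model(
    image=pixel_values, text=input_ids, mode='InternVL-C')
pred = logits_per_image.argmax(dim=-1)
print(class_names[pred[0].item()])
```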
Cross-Modal Retrieval

- English Zero-Shot Image-Text Retrieval [see details] (a recall computation sketch follows this list)

  model | Flickr30K image-to-text (R@1/5/10) | Flickr30K text-to-image (R@1/5/10) | COCO image-to-text (R@1/5/10) | COCO text-to-image (R@1/5/10) | avg |
  ---|---|---|---|---|---|
  OpenCLIP-G | 92.9 / 99.3 / 99.8 | 79.5 / 95.0 / 97.1 | 67.3 / 86.9 / 92.6 | 51.4 / 74.9 / 83.0 | 85.0 |
  EVA-02-CLIP-E+ | 93.9 / 99.4 / 99.8 | 78.8 / 94.2 / 96.8 | 68.8 / 87.8 / 92.8 | 51.1 / 75.0 / 82.7 | 85.1 |
  InternVL-C (ours) | 94.7 / 99.6 / 99.9 | 81.7 / 96.0 / 98.2 | 70.6 / 89.0 / 93.5 | 54.1 / 77.3 / 84.6 | 86.6 |
  InternVL-G (ours) | 95.7 / 99.7 / 99.9 | 85.0 / 97.0 / 98.6 | 74.9 / 91.3 / 95.2 | 58.6 / 81.3 / 88.0 | 88.8 |
- Chinese Zero-Shot Image-Text Retrieval [see details]

  model | Flickr30K-CN image-to-text (R@1/5/10) | Flickr30K-CN text-to-image (R@1/5/10) | COCO-CN image-to-text (R@1/5/10) | COCO-CN text-to-image (R@1/5/10) | avg |
  ---|---|---|---|---|---|
  CN-CLIP-ViT-H | 81.6 / 97.5 / 98.8 | 71.2 / 91.4 / 95.5 | 63.0 / 86.6 / 92.9 | 69.2 / 89.9 / 96.1 | 86.1 |
  OpenCLIP-XLM-R-H | 86.1 / 97.5 / 99.2 | 71.0 / 90.5 / 94.9 | 70.0 / 91.5 / 97.0 | 66.1 / 90.8 / 96.0 | 87.6 |
  InternVL-C (ours) | 90.3 / 98.8 / 99.7 | 75.1 / 92.9 / 96.4 | 68.8 / 92.0 / 96.7 | 68.9 / 91.9 / 96.5 | 89.0 |
  InternVL-G (ours) | 92.9 / 99.4 / 99.8 | 77.7 / 94.8 / 97.3 | 71.4 / 93.9 / 97.7 | 73.8 / 94.4 / 98.1 | 90.9 |
- Multilingual Zero-Shot Image-Text Retrieval on XTD [see details]

  method | EN | ES | FR | ZH | IT | KO | RU | JP | average |
  ---|---|---|---|---|---|---|---|---|---|
  AltCLIP | 95.4 | 94.1 | 92.9 | 95.1 | 94.2 | 94.4 | 91.8 | 91.7 | 93.7 |
  OpenCLIP-XLM-R-H | 97.3 | 96.1 | 94.5 | 94.7 | 96.0 | 90.2 | 93.9 | 94.0 | 94.6 |
  InternVL-C (ours) | 97.3 | 95.7 | 95.1 | 95.6 | 96.0 | 92.2 | 93.3 | 95.5 | 95.1 |
  InternVL-G (ours) | 98.6 | 97.7 | 96.5 | 96.7 | 96.9 | 95.1 | 94.8 | 96.1 | 96.6 |
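For the retrieval numbers above, each image is scored against every candidate caption (and vice versa), and R@K counts how often the ground-truth pairing appears among the top K matches. The sketch below computes image-to-text Recall@K from a `logits_per_image` matrix like the one returned in the Quick Start example, under the assumption that caption i is the ground-truth match for image i; it is a toy illustration of the metric, not the full benchmark pipeline.

```python
import torch

def recall_at_k(logits_per_image: torch.Tensor, k: int = 1) -> float:
    """Image-to-text Recall@K, assuming caption i is the ground-truth match for image i."""
    num_images = logits_per_image.size(0)
    # indices of the k highest-scoring captions for each image
    topk = logits_per_image.float().topk(k, dim=-1).indices
    targets = torch.arange(num_images, device=logits_per_image.device).unsqueeze(-1)
    return (topk == targets).any(dim=-1).float().mean().item()

# e.g. recall_at_k(logits_per_image, k=1) on the 3x3 similarity matrix from the
# Quick Start example returns 1.0, since each image's best-scoring caption is its own
```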
Multimodal Dialogue

- Zero-Shot Image Captioning [see details]

  method | COCO | Flickr30K | NoCaps |
  ---|---|---|---|
  Emu-I | 117.7 | - | - |
  DreamLLM | 115.4 | - | - |
  InternVL-G (ours) | 128.2 | 79.2 | 113.7 |
- Multimodal Benchmarks with Frozen LLM [see details]

  method | visual encoder | glue layer | LLM | res. | COCO | Flickr | NoCaps | VQAv2 | GQA | VizWiz | TextVQA | MME | POPE |
  ---|---|---|---|---|---|---|---|---|---|---|---|---|---|
  InstructBLIP | EVA-g | QFormer | V-7B | 224 | - | 82.4 | 123.1 | - | 49.2 | 34.5 | 50.1 | - | - |
  BLIP-2 | EVA-g | QFormer | V-13B | 224 | - | 71.6 | 103.9 | 41.0 | 41.0 | 19.6 | 42.5 | 1293.8 | 85.3 |
  InstructBLIP | EVA-g | QFormer | V-13B | 224 | - | 82.8 | 121.9 | - | 49.5 | 33.4 | 50.7 | 1212.8 | 78.9 |
  InternVL-Chat (ours) | IViT-6B | QLLaMA | V-7B | 224 | 141.4 | 89.7 | 120.5 | 72.3 | 57.7 | 44.5 | 42.1 | 1298.5 | 85.2 |
  InternVL-Chat (ours) | IViT-6B | QLLaMA | V-13B | 224 | 142.4 | 89.9 | 123.1 | 71.7 | 59.5 | 54.0 | 49.1 | 1317.2 | 85.4 |
- Multimodal Benchmarks with Trainable LLM [see details]

  method | vision encoder | LLM | res. | VQAv2 | GQA | VizWiz | SQA | TextVQA | POPE | MME | MMB | MMB-CN | MMVet |
  ---|---|---|---|---|---|---|---|---|---|---|---|---|---|
  LLaVA-1.5 | CLIP-L-336px | V-7B | 336 | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 1510.7 | 64.3 | 58.3 | 30.5 |
  LLaVA-1.5 | CLIP-L-336px | V-13B | 336 | 80.0 | 63.3 | 53.6 | 71.6 | 61.3 | 85.9 | 1531.3 | 67.7 | 63.6 | 35.4 |
  InternVL-Chat (ours) | IViT-6B-224px | V-7B | 336 | 79.3 | 62.9 | 52.5 | 66.2 | 57.0 | 86.4 | 1525.1 | 64.6 | 57.6 | 31.2 |
  InternVL-Chat (ours) | IViT-6B-224px | V-13B | 336 | 80.2 | 63.9 | 54.6 | 70.1 | 58.7 | 87.1 | 1546.9 | 66.5 | 61.9 | 33.7 |
  InternVL-Chat (ours) | IViT-6B-448px | V-13B | 448 | 82.0 | 64.1 | 60.1 | 71.6 | 64.8 | 87.2 | 1579.0 | 68.2 | 64.0 | 36.7 |
- Tiny LVLM [see details]

  Rank | Model | Version | Score |
  ---|---|---|---|
  🏅️ | InternVL | InternVL-Chat | 327.61 |
  🥈 | InternLM-XComposer-VL | InternLM-XComposer-VL-7B | 322.51 |
  🥉 | Bard | Bard | 319.59 |
  4 | Qwen-VL-Chat | Qwen-VL-Chat | 316.81 |
  5 | LLaVA-1.5 | Vicuna-7B | 307.17 |
  6 | InstructBLIP | Vicuna-7B | 300.64 |
  7 | InternLM-XComposer | InternLM-XComposer-7B | 288.89 |
  8 | BLIP2 | FlanT5xl | 284.72 |
  9 | BLIVA | Vicuna-7B | 284.17 |
  10 | Lynx | Vicuna-7B | 279.24 |
See INSTALLATION.md
Using InternViT-6B

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-224px',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

image = Image.open('./examples/image1.jpg').convert('RGB')

image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-224px')

pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

outputs = model(pixel_values)
```
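Continuing from the snippet above, you can inspect what the encoder returns. This is a minimal sketch: the attribute names assume the usual Hugging Face `BaseModelOutputWithPooling` layout, so check the model card if your checkpoint exposes different fields.

```python
# assumed field names; adjust if the remote code returns a different output class
print(outputs.last_hidden_state.shape)  # per-token features: [batch, num_tokens, hidden_dim]
print(outputs.pooler_output.shape)      # pooled image embedding: [batch, hidden_dim]
```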
Using InternVL-C(ontrastive) and InternVL-G(enerative)

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
from transformers import AutoTokenizer

model = AutoModel.from_pretrained(
    'OpenGVLab/InternVL-14B-224px',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternVL-14B-224px')

tokenizer = AutoTokenizer.from_pretrained(
    'OpenGVLab/InternVL-14B-224px', use_fast=False, add_eos_token=True)
tokenizer.pad_token_id = 0  # set pad_token_id to 0

images = [
    Image.open('./examples/image1.jpg').convert('RGB'),
    Image.open('./examples/image2.jpg').convert('RGB'),
    Image.open('./examples/image3.jpg').convert('RGB')
]
prefix = 'summarize:'
texts = [
    prefix + 'a photo of a red panda',  # English
    prefix + '一张熊猫的照片',  # Chinese: "a photo of a panda"
    prefix + '二匹の猫の写真'  # Japanese: "a photo of two cats"
]

pixel_values = image_processor(images=images, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
input_ids = tokenizer(texts, return_tensors='pt', max_length=80,
                      truncation=True, padding='max_length').input_ids.cuda()

# InternVL-C
logits_per_image, logits_per_text = model(
    image=pixel_values, text=input_ids, mode='InternVL-C')
probs = logits_per_image.softmax(dim=-1)
# tensor([[9.9609e-01, 5.2185e-03, 6.0070e-08],
#         [2.2949e-02, 9.7656e-01, 5.9903e-06],
#         [3.2932e-06, 7.4863e-05, 1.0000e+00]], device='cuda:0',
#        dtype=torch.bfloat16, grad_fn=<SoftmaxBackward0>)

# InternVL-G
logits_per_image, logits_per_text = model(
    image=pixel_values, text=input_ids, mode='InternVL-G')
probs = logits_per_image.softmax(dim=-1)
# tensor([[9.9609e-01, 3.1738e-03, 3.6322e-08],
#         [8.6060e-03, 9.9219e-01, 2.8759e-06],
#         [1.7583e-06, 3.1233e-05, 1.0000e+00]], device='cuda:0',
#        dtype=torch.bfloat16, grad_fn=<SoftmaxBackward0>)

# please set add_eos_token to False for generation
tokenizer.add_eos_token = False
image = Image.open('./examples/image1.jpg').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

tokenized = tokenizer("English caption:", return_tensors='pt')
pred = model.generate(
    pixel_values=pixel_values,
    input_ids=tokenized.input_ids.cuda(),
    attention_mask=tokenized.attention_mask.cuda(),
    num_beams=5,
    min_new_tokens=8,
)
caption = tokenizer.decode(pred[0].cpu(), skip_special_tokens=True).strip()
# English caption: a red panda sitting on top of a wooden platform
```
Using InternVL-Chat

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
from transformers import AutoTokenizer

path = "OpenGVLab/InternVL-Chat-Chinese-V1-1"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map='auto').eval()

tokenizer = AutoTokenizer.from_pretrained(path)

image = Image.open('./examples/image2.jpg').convert('RGB')
image = image.resize((448, 448))
image_processor = CLIPImageProcessor.from_pretrained(path)

pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

generation_config = dict(
    num_beams=1,
    max_new_tokens=512,
    do_sample=False,
)

question = "请详细描述图片"  # "Please describe the image in detail."
response = model.chat(tokenizer, pixel_values, question, generation_config)
```
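To inspect the result, print the response; a second question about the same image can be asked with the same call. This is a minimal usage sketch in which each call is an independent single turn (no chat history is passed), and the follow-up prompt below is hypothetical.

```python
print(question, response)

# a second, independent single-turn question about the same image (hypothetical prompt)
question = "Describe the main object in the image."
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(question, response)
```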
- Release high-resolution models
- Release InternVL-Chat
- Release InternVL-C(ontrastive) and InternVL-G(enerative)
- Release InternViT-6B
This project is released under the MIT license. Parts of this project contain code and models from other sources, which are subject to their respective licenses.
If you find this project useful in your research, please consider citing:
```bibtex
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}
```
InternVL is built with reference to the code of the following projects: OpenAI CLIP, Open CLIP, CLIP Benchmark, EVA, InternImage, ViT-Adapter, MMSegmentation, Transformers, DINOv2, BLIP-2, Qwen-VL, and LLaVA-1.5. Thanks for their awesome work!
If you want to join our WeChat group, please scan the following QR code to add our assistant as a WeChat friend: