Repositories under the blip2 topic:
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
Chat with NeRF enables users to interact with a NeRF model by typing in natural language.
Automate fashion image captioning using BLIP-2. Automatically generating descriptions of clothes on shopping websites can help customers without fashion knowledge better understand an item's features (attributes, style, functionality, etc.) and can increase online sales by enticing more customers.
A true multimodal LLaMA derivative -- on Discord!
Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching"
[ACM MM 2024] Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives
The Multimodal Model for Vietnamese Visual Question Answering (ViVQA)
Modifying LAVIS' BLIP2 Q-former with models pretrained on Japanese datasets.
This repository profiles, extracts, visualizes, and reuses generative AI weights, with the goals of building more accurate AI models and auditing/scanning weights at rest to identify knowledge domains and risks.
Annotations on a Budget: Leveraging Geo-Data Similarity to Balance Model Performance and Annotation Cost
Caption images across your datasets with state-of-the-art models from Hugging Face and Replicate!
Finetuning Large Visual Models on Visual Question Answering
Uses AI to scare people...more.
Caption generator using LAVIS and argostranslate
Too lazy to organize my desktop, so I make GPT + BLIP-2 do it
Creating stylish social media captions for an image using multimodal models and reinforcement learning
An end-to-end deep learning tool for image caption generation.
Exploring visual question answering with the Gemini LLM, where the input image can be supplied as a URL or in other formats.