Repositories under the blip2 topic:
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
Chat with NeRF enables users to interact with a NeRF model by typing in natural language.
Paddle Multimodal Integration and eXploration, supporting mainstream multimodal tasks, including end-to-end large-scale multimodal pretrained models and a diffusion model toolbox. Designed for high performance and flexibility.
A true multimodal LLaMA derivative -- on Discord!
Automate fashion image captioning using BLIP-2. Automatically generates descriptions of clothes on shopping websites, helping customers without fashion knowledge better understand item features (attributes, style, functionality, etc.) and increasing online sales by enticing more customers.
Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching"
This repository is for profiling, extracting, visualizing, and reusing generative AI weights, with the aim of building more accurate AI models and auditing/scanning weights at rest to identify knowledge domains and risks.
Annotations on a Budget: Leveraging Geo-Data Similarity to Balance Model Performance and Annotation Cost
Modifying LAVIS' BLIP2 Q-former with models pretrained on Japanese datasets.
Caption images across your datasets with state of the art models from Hugging Face and Replicate!
Uses AI to scare people...more.
Caption generator using LAVIS and argostranslate.
Too lazy to organize my desktop, so let GPT + BLIP-2 do it.
Creating stylish social media captions for an image using multimodal models and reinforcement learning.
An end-to-end deep-learning-based tool for image caption generation.
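Several of the projects above caption images with BLIP-2 through Hugging Face. As a minimal sketch of what that looks like, assuming the `Salesforce/blip2-opt-2.7b` checkpoint and illustrative generation settings (not the exact code of any repository listed here):

```python
# Sketch: image captioning with BLIP-2 via Hugging Face transformers.
# The model name, dtype, and max_new_tokens below are illustrative assumptions.

def tidy_caption(text: str) -> str:
    """Normalize a raw generated caption: trim whitespace, capitalize the
    first letter, and ensure it ends with a period."""
    text = text.strip()
    if not text:
        return text
    text = text[0].upper() + text[1:]
    if not text.endswith("."):
        text += "."
    return text

def caption_image(image_path: str,
                  model_name: str = "Salesforce/blip2-opt-2.7b") -> str:
    # Heavy imports are kept local so the text helper above stays cheap to use.
    import torch
    from PIL import Image
    from transformers import Blip2Processor, Blip2ForConditionalGeneration

    processor = Blip2Processor.from_pretrained(model_name)
    model = Blip2ForConditionalGeneration.from_pretrained(
        model_name, torch_dtype=torch.float16, device_map="auto"
    )
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to(
        model.device, torch.float16
    )
    out = model.generate(**inputs, max_new_tokens=30)
    return tidy_caption(processor.decode(out[0], skip_special_tokens=True))

if __name__ == "__main__":
    # "photo.jpg" is a placeholder path.
    print(caption_image("photo.jpg"))
```

Downstream steps vary by project: the translation repos feed the caption to a translator, and the social-media repos rerank or restyle it before posting.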