
InstructBLIP Pipeline

📝 Overview


This is a pipeline that provides InstructBLIP multimodal operation for Vicuna-family models running on oobabooga/text-generation-webui.


⏩ Just let me run the thing

Clone this repo into your extensions/multimodal/pipelines folder, then run the server with the multimodal extension enabled and a preferred pipeline selected via --multimodal-pipeline. Use AutoGPTQ as the loader.

> cd text-generation-webui
> cd extensions/multimodal/pipelines
> git clone https://github.com/kjerk/instructblip-pipeline
> cd ../../../
> python server.py --auto-devices --chat --listen --loader autogptq --multimodal-pipeline instructblip-7b

👀 Examples

Generation Parameter Presets:
  • LLaMA-Precise

  • Big O
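For context, a preset is just a named bundle of sampling parameters. The sketch below shows the general shape with illustrative values; these numbers are assumptions for demonstration, not the exact definitions of the presets shipped with textgen-webui.

# Illustrative sampling parameters in the spirit of a "precise" preset.
# NOTE: assumed values for demonstration only; the real LLaMA-Precise
# and Big O presets are defined in textgen-webui's presets/ directory.
precise_like_params = {
    "temperature": 0.7,          # lower temperature = more deterministic output
    "top_p": 0.1,                # tight nucleus-sampling cutoff
    "top_k": 40,                 # consider only the 40 most likely tokens
    "repetition_penalty": 1.18,  # discourage verbatim repetition
}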

💸 Requirements

  • AutoGPTQ loader (ExLlama is not supported for multimodal)

  • No additional dependencies beyond those of textgen-webui

VRAM Requirements

  • instructblip-7b + vicuna-7b: ~6GB VRAM

  • instructblip-13b + vicuna-13b: 11GB VRAM

The vanilla Vicuna-7b + InstructBLIP just barely runs on a 24GB GPU using Hugging Face Transformers directly, and the 13b at fp16 is too much. Thanks to optimization efforts and quantized models via AutoGPTQ, InstructBLIP and Vicuna can comfortably run in 8GB to 12GB of VRAM on textgen-webui. 🙌
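As a rough sanity check on those numbers, the weights-only arithmetic looks like this (ignoring activations, the KV cache, and the InstructBLIP components themselves):

def weight_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate model weight memory in GiB (weights only, no overhead)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

print(f"{weight_gib(13, 16):.1f}")  # fp16 13b:  ~24.2 GiB, over budget on a 24GB card
print(f"{weight_gib(7, 4):.1f}")    # 4-bit 7b:   ~3.3 GiB
print(f"{weight_gib(13, 4):.1f}")   # 4-bit 13b:  ~6.1 GiB

The 4-bit figures leave headroom for the InstructBLIP components and the context cache within the VRAM ranges listed above.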


Provided Pipelines
  • 'instructblip-7b' for Vicuna-7b family

  • 'instructblip-13b' for Vicuna-13b family

Non-Working Models
  • wizard-vicuna-13b-4bit-128g

🖥️ Inference

Due to the already heavy VRAM requirements of the respective models, the vision encoder and projector are kept on CPU, where they are still relatively quick, while the Q-Former is moved to GPU for speed.
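Here is a minimal sketch of that device split in plain Hugging Face Transformers terms, using the submodule names from transformers' InstructBLIP port (vision_model, language_projection, qformer); this pipeline's actual internals may differ.

import torch
from transformers import InstructBlipForConditionalGeneration

model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-vicuna-7b", torch_dtype=torch.float16
)

# The vision encoder and projector run once per image, so keeping them
# on CPU trades a small latency hit for a large VRAM saving.
model.vision_model.to("cpu", torch.float32)         # fp32: fp16 ops can be slow or unsupported on CPU
model.language_projection.to("cpu", torch.float32)

# The Q-Former's cross-attention passes benefit most from GPU speed.
model.qformer.to("cuda", torch.float16)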

☑️ TODO List

  • ✅ Full readme doc

  • ✅ Add demonstration images

  • ☐ Eat something tasty

🔭 Consider List

  • ❔ Allow for GPU inference of the image encoder and projector?

  • ❔ Investigate problems caused by multiple image embeddings, and possible remediations.

📄 License

This pipeline inherits the LAVIS license and is published under the BSD 3-Clause OSS license.


