🌐 Project Page • 🤗 Online Demo • 📃 Paper • 🤖 Model • 📹 Video
Zhaopeng Gu, Bingke Zhu, Guibo Zhu, Yingying Chen, Ming Tang, Jinqiao Wang
1. Introduction [Back to Top]
AnomalyGPT is the first Large Vision-Language Model (LVLM) based Industrial Anomaly Detection (IAD) method that can detect anomalies in industrial images without the need for manually specified thresholds. Existing IAD methods can only provide anomaly scores and require manual threshold setting, while existing LVLMs cannot detect anomalies in images. AnomalyGPT can not only indicate the presence and location of anomalies but also provide information about the image.
We leverage a pre-trained image encoder and a Large Language Model (LLM) to align IAD images and their corresponding textual descriptions via simulated anomaly data. We employ a lightweight, visual-textual feature-matching-based image decoder to obtain localization results, and design a prompt learner to provide fine-grained semantics to the LLM, fine-tuning the LVLM using prompt embeddings. Our method can also detect anomalies in previously unseen items with only a few normal samples provided.
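To make the feature-matching idea concrete, here is a minimal sketch of visual-textual feature matching for localization (our own illustration under simplified assumptions, not the authors' decoder): patch features from the image encoder are compared with text embeddings of normal/abnormal prompts, and the per-patch similarity to the abnormal prompt is upsampled into a pixel-level anomaly map.

```python
import torch
import torch.nn.functional as F

def anomaly_map(patch_feats: torch.Tensor,  # (N, D) patch features from the image encoder
                text_feats: torch.Tensor,   # (2, D) embeddings of [normal, abnormal] prompts
                out_size: int = 224) -> torch.Tensor:
    """Hypothetical helper: cosine-match patch features against text prompts."""
    patch_feats = F.normalize(patch_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    sim = patch_feats @ text_feats.T               # (N, 2) cosine similarities
    probs = sim.softmax(dim=-1)[:, 1]              # per-patch probability of "abnormal"
    hw = int(patch_feats.shape[0] ** 0.5)          # assumes a square patch grid
    amap = probs.reshape(1, 1, hw, hw)
    return F.interpolate(amap, size=out_size, mode="bilinear",
                         align_corners=False)[0, 0]  # pixel-level anomaly map
```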
2. Running AnomalyGPT Demo [Back to Top]
Clone the repository locally:
git clone https://github.com/CASIA-IVA-Lab/AnomalyGPT.git
Install the required packages:
pip install -r requirements.txt
You can download the pre-trained ImageBind model using this link. After downloading, put the downloaded file (imagebind_huge.pth) in the ./pretrained_ckpt/imagebind_ckpt/ directory.
To prepare the pre-trained Vicuna model, please follow the instructions provided [here].
We use the pre-trained parameters from PandaGPT to initialize our model. You can get the weights of PandaGPT trained with different strategies in the table below. In our experiments and online demo, we use Vicuna-7B with openllmplayground/pandagpt_7b_max_len_1024 due to limited computational resources. Better results are expected when switching to Vicuna-13B.
| Base Language Model | Maximum Sequence Length | Huggingface Delta Weights Address |
| --- | --- | --- |
| Vicuna-7B (version 0) | 512 | openllmplayground/pandagpt_7b_max_len_512 |
| Vicuna-7B (version 0) | 1024 | openllmplayground/pandagpt_7b_max_len_1024 |
| Vicuna-13B (version 0) | 256 | openllmplayground/pandagpt_13b_max_len_256 |
| Vicuna-13B (version 0) | 400 | openllmplayground/pandagpt_13b_max_len_400 |
Please put the downloaded 7B/13B delta weights file (pytorch_model.pt) in the ./pretrained_ckpt/pandagpt_ckpt/7b/ or ./pretrained_ckpt/pandagpt_ckpt/13b/ directory.
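If you prefer scripting the download, below is a minimal sketch using the huggingface_hub library (our addition, not part of the repo; it assumes the delta weights file inside the Hub repo is named pytorch_model.pt, as described above):

```python
from huggingface_hub import hf_hub_download

# Fetch the PandaGPT delta weights into the directory the code expects.
# repo_id comes from the table above; the filename is assumed from this README.
hf_hub_download(
    repo_id="openllmplayground/pandagpt_7b_max_len_1024",
    filename="pytorch_model.pt",
    local_dir="./pretrained_ckpt/pandagpt_ckpt/7b/",
)
```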
After that, you can download AnomalyGPT weights from the table below.
| Setup and Datasets | Weights Address |
| --- | --- |
| Unsupervised on MVTec-AD | AnomalyGPT/train_mvtec |
| Unsupervised on VisA | AnomalyGPT/train_visa |
| Supervised on MVTec-AD, VisA, MVTec-LOCO-AD and CrackForest | AnomalyGPT/train_supervised |
After downloading, put the AnomalyGPT weights in the ./code/ckpt/ directory.
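Before launching the demo, you may want to confirm that every checkpoint sits where the code expects it. A small sanity check (our own helper, not part of the repo; the exact AnomalyGPT weight file name may vary, so we only test that ./code/ckpt/ is non-empty):

```python
from pathlib import Path

# Verify the checkpoint layout described in the steps above.
checks = {
    "ImageBind": Path("./pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth"),
    "PandaGPT (7B)": Path("./pretrained_ckpt/pandagpt_ckpt/7b/pytorch_model.pt"),
}
for name, path in checks.items():
    status = "OK" if path.is_file() else f"MISSING ({path})"
    print(f"{name}: {status}")

ckpt_dir = Path("./code/ckpt")
ok = ckpt_dir.is_dir() and any(ckpt_dir.iterdir())
print("AnomalyGPT:", "OK" if ok else "MISSING (./code/ckpt)")
```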
In our online demo, we use the supervised setting as the default model for an enhanced user experience. You can also try other weights locally.
Upon completion of the previous steps, you can run the demo locally as follows:
cd ./code/
python web_demo.py
3. Train Your Own AnomalyGPT [Back to Top]
Prerequisites: Before training the model, make sure the environment is properly set up and the checkpoints of ImageBind, Vicuna and PandaGPT are downloaded.
You can download the MVTec-AD dataset from [this link] and VisA from [this link]. You can also download the pre-training data of PandaGPT from [here]. After downloading, put the data in the ./data directory.
The ./data directory should look like:
data
|---pandagpt4_visual_instruction_data.json
|---images
|-----|-- ...
|---mvtec_anomaly_detection
|-----|-- bottle
|-----|-----|----- ground_truth
|-----|-----|----- test
|-----|-----|----- train
|-----|-- capsule
|-----|-- ...
|----VisA
|-----|-- split_csv
|-----|-----|--- 1cls.csv
|-----|-----|--- ...
|-----|-- candle
|-----|-----|--- Data
|-----|-----|-----|----- Images
|-----|-----|-----|--------|------ Anomaly
|-----|-----|-----|--------|------ Normal
|-----|-----|-----|----- Masks
|-----|-----|-----|--------|------ Anomaly
|-----|-----|--- image_anno.csv
|-----|-- capsules
|-----|-----|----- ...
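After arranging the files, you can quickly confirm that the layout matches the tree above (a hypothetical check of ours; the paths mirror this README):

```python
from pathlib import Path

# Spot-check a few paths from the directory tree above.
required = [
    "data/pandagpt4_visual_instruction_data.json",
    "data/images",
    "data/mvtec_anomaly_detection/bottle/train",
    "data/VisA/split_csv/1cls.csv",
]
missing = [p for p in required if not Path(p).exists()]
print("Data layout OK" if not missing else f"Missing: {missing}")
```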
The table below shows the training hyperparameters used in our experiments. The hyperparameters were selected based on the constraints of our computational resources, i.e., 2 × RTX 3090 GPUs.
| Base Language Model | Epoch Number | Batch Size | Learning Rate | Maximum Length |
| --- | --- | --- | --- | --- |
| Vicuna-7B | 50 | 16 | 1e-3 | 1024 |
To train AnomalyGPT on MVTec-AD dataset, please run the following commands:
cd ./code
bash ./scripts/train_mvtec.sh
The key arguments of the training script are as follows:
- `--data_path`: The path of the json file pandagpt4_visual_instruction_data.json.
- `--image_root_path`: The root path of the training images of PandaGPT.
- `--imagebind_ckpt_path`: The path of the ImageBind checkpoint.
- `--vicuna_ckpt_path`: The directory that saves the pre-trained Vicuna checkpoints.
- `--max_tgt_len`: The maximum sequence length of training instances.
- `--save_path`: The directory that saves the trained delta weights. This directory will be created automatically.
- `--log_path`: The directory that saves the log. This directory will be created automatically.
Note that the epoch number can be set via the `epochs` argument in the ./code/config/openllama_peft.yaml file, and the learning rate can be set in ./code/dsconfig/openllama_peft_stage_1.json.
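For reference, here is a hedged sketch of reading those two values programmatically (our own snippet; the `epochs` key is named in this README, while the learning-rate location inside the DeepSpeed config is assumed to follow the standard optimizer -> params -> lr layout):

```python
import json
import yaml  # pip install pyyaml

# Read the epoch number from the PEFT config (key named in this README).
with open("./code/config/openllama_peft.yaml") as f:
    cfg = yaml.safe_load(f)
print("epochs:", cfg.get("epochs"))

# Read the learning rate from the DeepSpeed config (standard layout assumed).
with open("./code/dsconfig/openllama_peft_stage_1.json") as f:
    ds = json.load(f)
print("lr:", ds.get("optimizer", {}).get("params", {}).get("lr"))
```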
AnomalyGPT is licensed under the CC BY-NC-SA 4.0 license.
If you find AnomalyGPT useful in your research or applications, please kindly cite using the following BibTeX:
@article{gu2023anomalygpt,
title={AnomalyGPT: Detecting Industrial Anomalies using Large Vision-Language Models},
author={Gu, Zhaopeng and Zhu, Bingke and Zhu, Guibo and Chen, Yingying and Tang, Ming and Wang, Jinqiao},
journal={arXiv preprint arXiv:2308.15366},
year={2023}
}
We borrow some code and the pre-trained weights from PandaGPT. Thanks for their wonderful work!