SunnyHaze / IMDLBenCo

A comprehensive benchmark & codebase for Image manipulation detection/localization.

Home Page: https://scu-zjz.github.io/IMDLBenCo-doc


IMDL-BenCo: Comprehensive Benchmark and Codebase for Image Manipulation Detection & Localization

Xiaochen Ma†, Xuekang Zhu†, Lei Su†, Bo Du†, Zhuohang Jiang†, Bingkui Tong†, Zeyu Lei†, Xinyu Yang†, Chi-Man Pun, Jiancheng Lv, Jizhe Zhou*

†: joint first author & equal contribution; *: corresponding author
🏎️Special thanks to Dr. Wentao Feng for the workplace, computation power, and physical infrastructure support.


Overview

☑️Welcome to IMDL-BenCo, the first comprehensive IMDL benchmark and modular codebase.

  • This codebase is under long-term maintenance and will be updated continuously. New features, additional baseline/SOTA models, and bug fixes will be incorporated over time. You can find the corresponding plan here shortly.
  • This repo decomposes the IMDL framework into standardized, reusable components and revises the model construction pipeline, improving coding efficiency and customization flexibility.
  • This repo fully implements or incorporates training code for state-of-the-art models to establish a comprehensive IMDL benchmark.
  • Please cite and star this repo if you find it helpful. This will encourage us a lot 🥰.


Important! The current documentation and tutorials are not yet complete. This project requires a lot of manpower, and we will do our best to complete it as quickly as possible. For now, you can try the demo by following the brief tutorial below.

Features under development

This repository has completed training, testing, robustness testing, Grad-CAM, and other functionalities for mainstream models.

However, more features are currently in testing for improved user experience. Updates will be rolled out frequently. Stay tuned!

  • Install and download via PyPI

  • Command-line invocation, similar to conda in Anaconda.

    • Dynamically generates all training scripts to support personalized modification.
  • An information library for IMDL datasets, with download and management support.

  • Support for Weights & Biases visualization.

Quick start

Prepare environments

Currently, you can create a PyTorch environment and run the following commands to try our repo.

git clone https://github.com/scu-zjz/IMDLBenCo.git
cd IMDLBenCo
pip install -r requirements.txt
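
As a quick sanity check (an illustrative snippet, not part of the repo), you can confirm that PyTorch is installed and whether CUDA is visible:

import torch

# prints the installed PyTorch version and whether a CUDA device is available
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())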

Prepare IML Datasets

  • We define three types of Dataset classes:
    • JsonDataset, which reads input images and their corresponding ground truths from a JSON file following a protocol like this:
      [
          [
            "/Dataset/CASIAv2/Tp/Tp_D_NRN_S_N_arc00013_sec00045_11700.jpg",
            "/Dataset/CASIAv2/Gt/Tp_D_NRN_S_N_arc00013_sec00045_11700_gt.png"
          ],
          ......
          [
            "/Dataset/CASIAv2/Au/Au_nat_30198.jpg",
            "Negative"
          ],
          ......
      ]
      
      where "Negative" represents a totally black ground truth that doesn't need a path (all authentic)
    • ManiDataset which loads images and ground truth pairs automatically from a directory having sub-directories named Tp (for input images) and Gt (for ground truths). This class will generate the pairs using the sorted os.listdir() function. You can take this folder as an example.
    • BalancedDataset is a class used to manage large datasets according to the training method of CAT-Net. It reads an input file as ./runs/balanced_dataset.json, which contains types of datasets and corresponding paths. Then, for each epoch, it randomly samples over 1800 images from each dataset, achieving uniform sampling among datasets with various sizes.
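
For reference, below is a minimal sketch (not part of the official toolkit) for generating a JSON manifest for JsonDataset. It assumes the CASIAv2-style layout shown above: tampered images under Tp/, masks under Gt/ named with a _gt suffix, and authentic images under Au/ mapped to "Negative"; the function name build_manifest and the output filename are illustrative.

import json
import os

def build_manifest(root, out_path="casiav2_manifest.json"):
    """Build a [image, ground-truth] pair list compatible with JsonDataset."""
    pairs = []
    tp_dir, gt_dir, au_dir = (os.path.join(root, d) for d in ("Tp", "Gt", "Au"))

    # Tampered images: pair each image with its ground-truth mask.
    for name in sorted(os.listdir(tp_dir)):
        stem, _ = os.path.splitext(name)
        mask = os.path.join(gt_dir, f"{stem}_gt.png")
        if os.path.exists(mask):
            pairs.append([os.path.join(tp_dir, name), mask])

    # Authentic images: the ground truth is all-black, marked as "Negative".
    for name in sorted(os.listdir(au_dir)):
        pairs.append([os.path.join(au_dir, name), "Negative"])

    with open(out_path, "w") as f:
        json.dump(pairs, f, indent=4)

if __name__ == "__main__":
    build_manifest("/Dataset/CASIAv2")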

Training

Prepare pre-trained weights (if needed)

Some models, such as TruFor, require pre-trained weights, so you need to download them in advance. The download guidance for each model can be found in its folder under ./IMDLBenCo/model_zoo. For example, the guidance for TruFor is in ./IMDLBenCo/model_zoo/trufor/README.md.

Run shell script

You can customize training by modifying the dataset path and various parameters. For the specific meaning of each parameter, run python ./IMDLBenCo/training_scripts/train.py -h.

By default, all provided scripts are called as follows:

sh ./runs/demo_train_iml_vit.sh

Visualize the loss & metrics & figures

Now you can launch TensorBoard to visualize the training results in a browser.

tensorboard --logdir ./

Customize your own model

Our design paradigm aims for most of the customization for new models (including the models themselves and their respective losses) to take place within model_zoo. Therefore, we have adopted a dedicated design paradigm to interface with the other modules. It includes the following features:

  • Loss functions are defined in __init__ and computed within forward().
  • The parameter list of forward() must consist of fixed keys that correspond to the required inputs, such as image, mask, and so forth. Additional types of information can be generated via post_func and its respective fields, and are received through parameters of the same names in forward().
  • The return value of forward() is a well-organized dictionary, for example:
  # -----------------------------------------
  output_dict = {
      # loss for backward propagation
      "backward_loss": combined_loss,
      # predicted mask, metrics will be computed on it automatically
      "pred_mask": mask_pred,
      # predicted binary label, metrics will be computed on it automatically
      "pred_label": None,

      # ---- values below are for visualization ----
      # automatically visualized from the key-value pairs
      "visual_loss": {
          # customized scalars for visualization; each key is shown as the figure name.
          # Any number of keys and any string can be used as a key.
          "predict_loss": predict_loss,
          "edge_loss": edge_loss,
          "combined_loss": combined_loss
      },

      "visual_image": {
          # customized tensors for visualization; each key is shown as the figure name.
          # Any number of keys and any string can be used as a key.
          "pred_mask": mask_pred,
          "edge_mask": edge_mask
      }
  }
  # -----------------------------------------
Following this format, the framework can conveniently backpropagate the corresponding loss, compute final metrics from the predicted masks, and visualize any other scalars and tensors to monitor the training process.
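
As a concrete reference, here is a minimal sketch of a custom model written against this paradigm. The class MyDetector, its toy backbone, and the loss choices are illustrative assumptions rather than a model from the model_zoo; only the fixed forward() keys and the returned dictionary follow the convention above.

import torch
import torch.nn as nn

class MyDetector(nn.Module):
    """Illustrative sketch only; not an official model_zoo model."""

    def __init__(self):
        super().__init__()
        # toy backbone predicting a 1-channel manipulation mask
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
        # loss functions are defined in __init__ ...
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, image, mask, *args, **kwargs):
        # ... and computed within forward()
        logits = self.backbone(image)
        predict_loss = self.loss_fn(logits, mask)
        combined_loss = predict_loss  # only one loss term in this toy example

        mask_pred = torch.sigmoid(logits)
        return {
            "backward_loss": combined_loss,
            "pred_mask": mask_pred,
            "pred_label": None,
            "visual_loss": {
                "predict_loss": predict_loss,
                "combined_loss": combined_loss,
            },
            "visual_image": {
                "pred_mask": mask_pred,
            },
        }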

Citation

If you find our work valuable and it has contributed to your research or projects, we kindly request that you cite our paper. Your recognition is a driving force for our continuous improvement and innovation🤗.

@misc{ma2024imdlbenco,
    title={IMDL-BenCo: A Comprehensive Benchmark and Codebase for Image Manipulation Detection & Localization},
    author={Xiaochen Ma and Xuekang Zhu and Lei Su and Bo Du and Zhuohang Jiang and Bingkui Tong and Zeyu Lei and Xinyu Yang and Chi-Man Pun and Jiancheng Lv and Jizhe Zhou},
    year={2024},
    eprint={2406.10580},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}


License: Creative Commons Attribution 4.0 International

