mojtaba-nafez/invariant-anomaly-detection

Implementation of robust deep anomaly detection models using Partial Conditional Invariant Regularization (PCIR). Accepted to NeurIPS 2023.

Invariant Anomaly Detection under Distribution Shifts: A Causal Perspective

Getting Started

  1. Navigate to submodules/anomalib and install with pip install -e .
  2. Navigate to submodules/diagvib-6 and install with pip install -e .
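
Put together, the installation might look like the following; the git submodule step is an assumption about how the submodules are fetched and can be skipped if they are already present:

    # Fetch the submodules if they are not already populated (assumed standard git setup)
    git submodule update --init --recursive

    # Editable installs, equivalent to cd-ing into each submodule and running pip install -e .
    pip install -e submodules/anomalib
    pip install -e submodules/diagvib-6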

Training and Testing

  1. Config files for datasets and models are under configs/.

  2. Hyperparameters for experiments are already specified in the config files.

  3. To train and test models, use the command:

    python train.py --model path_to_model_config --dataset path_to_dataset_config
    
  4. For testing only, append --weight path_to_weight_file to the command above (see the example after this list).

  5. Note: train.py already runs testing at the end of its pipeline, so a separate testing run is only needed when loading existing weights (step 4).
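
For example, a testing-only run that reuses existing weights (step 4) takes the form below; the paths are placeholders:

    python train.py --model path_to_model_config --dataset path_to_dataset_config --weight path_to_weight_file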

Config File Details

Config files are grouped as follows (an illustrative layout is sketched after the list):

  • configs/datasets: dataset and experiment settings.
  • configs/models: model architectures and hyperparameters.
  • configs/evaluation: evaluation configs.
  • configs/hyperparameter: hyperparameter search configs.
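
An illustrative layout built only from the config files mentioned in this README; the actual directory contents may differ:

    configs/
    ├── datasets/
    │   └── wilds.yaml
    ├── models/
    │   ├── stfpm.yaml
    │   └── stfpm_mmd.yaml
    ├── evaluation/
    │   └── eval.yaml
    └── hyperparameter/
        └── ...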

Running Experiments

For example, to run experiments with stfpm on Camelyon17:

python train.py --model configs/models/stfpm.yaml --dataset configs/datasets/wilds.yaml

For the same experiment with MMD regularization, use:

python train.py --model configs/models/stfpm_mmd.yaml --dataset configs/datasets/wilds.yaml

Similar commands can be used for other experiments.

Special Experiments

  • For Camelyon17, specify the environments for training, validation, and testing in the dataset config.
  • For DiagVib experiments, extra datasets generated by the DiagVib code under submodules/ are required; a possible workflow is sketched below.
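
A possible DiagVib workflow, combining the environment-generation command (described further below) with a training run; the dataset config name configs/datasets/diagvib.yaml is hypothetical and should be replaced with the actual DiagVib dataset config:

    # Generate the DiagVib environments (see "Manual Environment Generation for DiagVib")
    python generate_diagvib_envs.py --root path_to_environment_config_folder

    # Train on the generated data; the dataset config name below is hypothetical
    python train.py --model configs/models/stfpm.yaml --dataset configs/datasets/diagvib.yaml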

Evaluations

  • Evaluation configs are stored in configs/evaluation.

  • To evaluate models, use:

    python evaluation.py --config configs/evaluation/eval.yaml
    
  • Note: There is no need to evaluate models on Camelyon17 separately, since evaluation is already part of its training pipeline.

Hyperparameter Search

  • Configs for hyperparameter search are stored in configs/hyperparameter.

  • Hyperparameter search is done via grid search.

  • To start hyperparameter search or ablation study, use:

    python hyperparameter_search.py --model path_to_model_config --dataset path_to_dataset_config --hyperparameter path_to_hyperparameter_config
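
For instance, a grid search for the MMD-regularized stfpm model on WILDS could be launched as follows; the hyperparameter config filename is hypothetical and should be replaced with an actual file from configs/hyperparameter:

    # The hyperparameter config name below is hypothetical; pick an actual file from configs/hyperparameter
    python hyperparameter_search.py --model configs/models/stfpm_mmd.yaml --dataset configs/datasets/wilds.yaml --hyperparameter configs/hyperparameter/stfpm_mmd.yaml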
    

Manual Environment Generation for DiagVib

If you want to generate the DiagVib environments manually, use:

python generate_diagvib_envs.py --root path_to_environment_config_folder

Code Explanation

Various scripts and classes were written to handle datasets, train and evaluate models, and generate Out-Of-Distribution (OOD) configs. Detailed explanations can be found inside the scripts.

Tools

The tools/generate_ood_config.py script generates configs for all possible OOD environments for each feature, based on a base environment config.

Evaluation

The evaluation.py script evaluates an anomaly detection model with an evaluation config specified in configs/evaluation/eval.yaml.

Please refer to the respective script or class for more detailed information.

License

MIT License

