ngocbh / MAGE

Official implementation of "Explaining Graph Neural Networks via Structure-aware Interaction Index" (ICML'24)

(Teaser figure)

Details of algorithms and experimental results can be found in our paper:

@inproceedings{bui2024explaining,
  title={Explaining Graph Neural Networks via Structure-aware Interaction Index},
  author={Bui, Ngoc and Nguyen, Hieu Trung and Nguyen, Viet Anh and Ying, Rex},
  booktitle={International Conference on Machine Learning},
  organization={PMLR},
  year={2024}
}

Please consider citing this paper if you find it helpful.

Installation

Requirements: Python 3.8.17 and CUDA 11.3.

conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
pip install torch-scatter==2.0.9 torch-sparse==0.6.13 torch-cluster==1.6.0 torch-spline-conv==1.2.1 -f https://data.pyg.org/whl/torch-1.11.0+cu113.html
# For a different CUDA version, point the wheel index at the matching build, e.g.
# pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-1.11.1+${CUDA}.html
pip install -r requirements.txt
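
A quick optional check that the installed PyTorch can see the GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"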

MOSEK requirement: request a trial MOSEK license and put the license file at ~/mosek/mosek.lic.
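
To confirm the license file is in place before running experiments, a minimal sketch (path as above; standard library only):

    import os

    # Check that the MOSEK license file sits at the location expected above.
    lic_path = os.path.expanduser("~/mosek/mosek.lic")
    if os.path.isfile(lic_path):
        print(f"MOSEK license found at {lic_path}")
    else:
        print(f"No MOSEK license at {lic_path}; request a trial license first")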

Train GNNs

To train a GNN model on a dataset:

python train_gnns.py models=<model_name> datasets=<dataset_name>
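
For example, to train a GCN on ba_2motifs (just one combination of the model and dataset names listed under Availability below):

python train_gnns.py models=gcn datasets=ba_2motifs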

We also provide checkpoints for the models used in our experiments in the checkpoints directory of this repo. We highly recommend using these checkpoints to reproduce the results in the paper. Some of the checkpoints are taken directly from the SubgraphX repo.

Availability

Datasets

  • For the single-motif experiment:

    • ba_2motifs
    • ba_house_grid
    • spmotif
    • mnist75sp
  • For the multi-motif experiment:

    • ba_house_and_grid
    • ba_house_or_grid
    • mutag0
    • benzene
  • For the text experiment:

    • graph_sst5
    • twitter

Text experiment notes:

  • Download the text datasets from haiyang's google drive.
  • Unzip the datasets into the /datasets folder.
  • Rename the folder and the files inside the raw folder: Graph-SST2 -> graph_sst2; Graph-Twitter -> twitter (see the sketch after this list).
  • Run with experiments=single_motif.
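
A minimal sketch of the renaming step, assuming the archives were unzipped to datasets/Graph-SST2 and datasets/Graph-Twitter (the files inside each raw folder follow the same lowercase pattern):

mv datasets/Graph-SST2 datasets/graph_sst2
mv datasets/Graph-Twitter datasets/twitter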

Methods

We compare our method MAGE (explainer name mage) with the following baselines: same, gstarx, subgraphx, match_explainer, pgexplainer, gnn_explainer, and grad_cam.

We tested with the following three models: gcn, gin, and gat (the current GAT implementation does not support grad_cam, pgexplainer, or gnn_explainer).

Usage

To reproduce the results on the single-motif datasets (ba_2motifs, ba_house_grid, spmotif, mnist75sp):

python run.py explainers=<explainer_names> models=<model_name> datasets=<dataset_name> experiments=single_motif

To reproduce the results on the multi-motif datasets (ba_house_and_grid, ba_house_or_grid, mutag0, benzene):

python run.py explainers=<explainer_names> models=<model_name> datasets=<dataset_name> experiments=multi_motifs
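
For example:

python run.py explainers=mage models=gcn datasets=mutag0 experiments=multi_motifs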

To reproduce the results on the sentiment classification datasets (graph_sst2, twitter):

python run.py explainers=<explainer_names> models=<model_name> datasets=<dataset_name> experiments=single_motif
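
For example:

python run.py explainers=mage models=gcn datasets=twitter experiments=single_motif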

Other parameters that can be included:

    experiments=<experiment_name> # single_motif, multi_motifs
    device_id=0 # GPU id
    rerun=True # Rerun the experiment or not
    max_ins=<number> # maximum number of evaluated instances

Example:

python3 run.py explainers=mage models=gcn datasets=ba_2motifs experiments=single_motif rerun=True run_id=5 random_seed=1

Evaluation Metrics

After running the experiments, results are saved to a JSON file at /results/[runid]/[dataset]/[model]/[experiment]/results.json.

The performance metrics reported in the paper correspond to the following keys in results.json:

  • F1: node_f1_score
  • AUC: auc
  • AMI: ami_score
  • Fid_alpha: fid_delta
  • Fid: org_fid_delta
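
A minimal sketch for reading those keys back out of a results file; the path below is a hypothetical example following the pattern above, and the flat key layout is an assumption (adjust if results.json nests them differently):

    import json

    # Hypothetical path following /results/[runid]/[dataset]/[model]/[experiment]/results.json
    path = "results/5/ba_2motifs/gcn/single_motif/results.json"
    with open(path) as f:
        results = json.load(f)

    # Paper metric name -> key in results.json, as listed above
    metrics = {
        "F1": "node_f1_score",
        "AUC": "auc",
        "AMI": "ami_score",
        "Fid_alpha": "fid_delta",
        "Fid": "org_fid_delta",
    }
    for name, key in metrics.items():
        print(name, results.get(key))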

Acknowledgement

A substantial portion of the source code has been borrowed from the repositories of the baseline methods listed above.

Contact

If you have any problems, please open an issue in this repository or send an email to ngoc.bui@yale.edu.
