We implemented our method in Transformer-explainability_combined.ipynb. We also added several functions, including get_relprop() and compute_layer_rollout_attention(), to baselines/ViT/ViT_LRP.py. All other code is taken from https://github.com/hila-chefer/Transformer-Explainability.
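The name compute_layer_rollout_attention() suggests the attention-rollout technique (Abnar & Zuidema, 2020), in which per-layer attention maps are averaged over heads, augmented with the residual connection, row-normalized, and multiplied across layers. A minimal NumPy sketch of that technique follows; the function name `rollout_attention` and the input format are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def rollout_attention(attentions):
    """Attention rollout sketch (not the repo's implementation).

    attentions: list of per-layer attention maps, each of shape
    (num_heads, num_tokens, num_tokens).
    Returns a (num_tokens, num_tokens) cumulative attention map.
    """
    result = None
    for attn in attentions:
        a = attn.mean(axis=0)                  # average over heads
        a = a + np.eye(a.shape[-1])            # account for residual connection
        a = a / a.sum(axis=-1, keepdims=True)  # re-normalize rows
        # Compose with the rollout of the layers below
        result = a if result is None else a @ result
    return result
```

Because each row-normalized matrix is row-stochastic, the rolled-out map stays row-stochastic, so each token's attributions over the input tokens sum to one.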