DFKI-NLP / OLM

Explanation Method for Neural Models in NLP

OLM: Occlusion with Language Modeling

Introduction

Recently, state-of-the-art NLP models have gained an increasing syntactic and semantic understanding of language, and explanation methods are crucial for understanding their decisions. OLM is a novel explanation method that combines occlusion with language models to sample valid, syntactically correct replacements that have high likelihood given the context of the original input.
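To make the idea concrete, here is a minimal sketch of OLM under simplifying assumptions: it uses off-the-shelf HuggingFace pipelines as stand-ins for the fine-tuned models used in the paper, whitespace tokenization, and the top-k language-model candidates (weighted by their probabilities) as an approximation of sampling. It assumes a recent transformers version; the function names are hypothetical and not part of this codebase.

# Minimal sketch of the OLM idea, not the repository's implementation.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def label_prob(text, label):
    # Probability the classifier assigns to `label` for `text`.
    scores = classifier(text, top_k=None)  # scores for all labels
    return next(s["score"] for s in scores if s["label"] == label)

def olm_relevances(tokens, label, n_samples=5):
    # Relevance of token i = likelihood-weighted drop in p(label)
    # when token i is replaced by language-model samples.
    base = label_prob(" ".join(tokens), label)
    relevances = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [fill_mask.tokenizer.mask_token] + tokens[i + 1:]
        candidates = fill_mask(" ".join(masked), top_k=n_samples)
        weight = sum(c["score"] for c in candidates)
        drop = 0.0
        for c in candidates:
            replaced = tokens[:i] + [c["token_str"]] + tokens[i + 1:]
            drop += c["score"] * (base - label_prob(" ".join(replaced), label))
        relevances.append(drop / weight)
    return relevances

print(olm_relevances("the movie was great".split(), "POSITIVE"))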

This is the repository for the paper Considering Likelihood in NLP Classification Explanations with Occlusion and Language Modeling.

🔭  Overview

Path          Description
experiments/  Scripts that run the experiments for the raw results in the paper.
notebooks/    Notebooks used to calculate and visualize the results in the paper.

✅  Requirements

The code is tested with:

  • Python 3.7

🚀  Installation

From source

git clone https://github.com/DFKI-NLP/olm
cd olm
pip install -r requirements.txt

🔬  Experiments

Datasets

The datasets are part of GLUE and can be downloaded by following the links below. Download each dataset and unpack it into data/glue_data/<TASK>/, matching the --data_dir passed to the experiment scripts below (a sketch of the expected layout follows the table).

Dataset  Download
SST-2    [Link]
MNLI     [Link]
CoLA     [Link]
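After unpacking, the directory layout should look roughly like this (file names follow the standard GLUE distribution and are shown for orientation only):

data/glue_data/
    SST-2/   train.tsv, dev.tsv, ...
    MNLI/    train.tsv, dev_matched.tsv, dev_mismatched.tsv, ...
    CoLA/    train.tsv, dev.tsv, ...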

Fine-tune Model

SST-2

cd olm_tasks
./finetune_sst2.sh

CoLA

cd olm_tasks
./finetune_cola.sh

Compute Relevances

PYTHONPATH="./" python experiments/<TASK>.py \
    --data_dir ./data/glue_data/<TASK>/ \
    --model_name_or_path <MODEL NAME OR PATH> \
    --output_dir <OUTPUT DIR> \
    --strategy grad \ # either one of grad, gradxinput, saliency, integratedgrad, unk, resampling, resampling_std, delete
    --do_run \
    --do_relevances \
    --cuda_device 0
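The strategies differ in how the occluded token is handled: the gradient-based baselines (grad, gradxinput, saliency, integratedgrad) compute relevances from gradients and occlude nothing, while the occlusion variants substitute or remove the token. A rough Python sketch of the occlusion variants, using a hypothetical helper that is not the repository's API:

# Rough sketch of the occlusion variants behind --strategy
# (hypothetical helper, not the repository's API).
def occlude(tokens, i, strategy, sample_from_lm=None):
    out = list(tokens)
    if strategy == "delete":
        del out[i]                 # drop the token entirely
    elif strategy == "unk":
        out[i] = "[UNK]"           # replace with an unknown-token marker
    elif strategy in ("resampling", "resampling_std"):
        # OLM: draw a likely in-context replacement from a language model
        # (resampling_std presumably aggregates over the samples differently,
        # e.g. via their standard deviation).
        out[i] = sample_from_lm(tokens, i)
    return out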

Visualize Results

The notebook notebooks/relevance-mnli.ipynb visualizes the occlusion results.
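For a quick look outside the notebook, per-token relevances can be plotted as a simple bar chart (the values below are made up for illustration; the notebook produces the paper's figures):

# Quick stand-in visualization of per-token relevances
# (values are hypothetical, not real OLM output).
import matplotlib.pyplot as plt

tokens = ["the", "movie", "was", "great"]
relevances = [0.01, 0.05, 0.02, 0.60]

plt.figure(figsize=(4, 2))
plt.bar(range(len(tokens)), relevances)
plt.xticks(range(len(tokens)), tokens)
plt.ylabel("relevance")
plt.tight_layout()
plt.show()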

📚  Citation

If you find the code or dataset patch helpful, please cite the following paper:

@inproceedings{harbecke-alt-2020-olm,
    title={Considering Likelihood in NLP Classification Explanations with Occlusion and Language Modeling},
    author={David Harbecke and Christoph Alt},
    year={2020},
    booktitle={Proceedings of ACL 2020, Student Research Workshop},
    url={https://arxiv.org/abs/2004.09890}
}

📘  License

OLM is released under the terms of the MIT License.
