FairEnough

Source codes for EACL 2023 paper "Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP"

If you use the code, please cite the following paper:

@article{han2023fair,
  title={Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP},
  author={Han, Xudong and Baldwin, Timothy and Cohn, Trevor},
  journal={arXiv preprint arXiv:2302.05711},
  year={2023}
}

Code

Our evaluations build on publicly available resources. Please see https://github.com/HanXudong/fairlib for more details.

  • 0_evaluation/aggregation.ipynb

    This notebook implements the aggregation methods described in Section 3 of the paper.

  • 0_evaluation/raw_data.ipynb

    This notebook reproduces Figures 1 and 7 in the paper.

  • 0_evaluation/confusion_matrices.ipynb

    This notebook reproduces Figure 7 of the paper.

  • 1_selection/model_selection.ipynb

    This notebook includes the different model selection methods described in Section 4.3 of the paper. It also reproduces Figure 3.

  • 2_comparison/model_comparison.ipynb

    This notebook reproduces Figures 4 and 8, and Tables 3 and 5.
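As a rough illustration of the kind of aggregation and selection the notebooks above implement, the sketch below scores hypothetical candidate models by their Euclidean distance to the utopia point over a (performance, fairness) pair and picks the closest one. The candidate numbers and the utopia point are illustrative, not taken from the paper; see the notebooks for the actual methods.

```python
import numpy as np

# Hypothetical candidate models as (performance, fairness) pairs in [0, 1],
# where higher is better on both axes.
candidates = np.array([
    [0.90, 0.60],
    [0.85, 0.80],
    [0.75, 0.95],
])

def distance_to_optimum(scores, utopia=(1.0, 1.0)):
    """Euclidean distance from each candidate's (performance, fairness)
    point to the utopia point; lower is better."""
    return np.linalg.norm(np.asarray(utopia) - scores, axis=1)

distances = distance_to_optimum(candidates)
best = int(np.argmin(distances))  # index of the selected candidate
```

Collapsing the two axes into a single distance makes candidates with very different performance–fairness trade-offs directly comparable; the notebooks compare this style of criterion against other selection strategies.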
