KranthiGV / bias-evaluation

We study the general trend in bias reduction as newer pre-trained language models are released. Three recent models (ELECTRA, DeBERTa, and DistilBERT) are evaluated against two bias benchmarks, StereoSet and CrowS-Pairs, and compared to a BERT baseline using each benchmark's associated metrics.
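The repository carries the actual evaluation scripts; as a rough illustration of the kind of measurement these benchmarks perform, the sketch below scores a CrowS-Pairs-style sentence pair with one of the evaluated checkpoints through the HuggingFace transformers masked-LM API. It is a minimal sketch, not this repo's code: the pseudo-log-likelihood scoring, the example sentences, and the distilbert-base-uncased checkpoint are assumptions made for illustration only.

    # Minimal sketch (not this repo's evaluation code): compare pseudo-log-likelihoods
    # of a stereotyping vs. anti-stereotyping sentence under a masked-LM checkpoint,
    # the style of comparison the CrowS-Pairs metric is built on.
    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    MODEL_NAME = "distilbert-base-uncased"  # one of the evaluated model families
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()

    def pseudo_log_likelihood(sentence: str) -> float:
        """Sum the log-probability of each token when it is masked in turn."""
        input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
        total = 0.0
        for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
            masked = input_ids.clone()
            masked[i] = tokenizer.mask_token_id
            with torch.no_grad():
                logits = model(masked.unsqueeze(0)).logits
            total += torch.log_softmax(logits[0, i], dim=-1)[input_ids[i]].item()
        return total

    stereo = pseudo_log_likelihood("Women are bad at driving.")
    anti = pseudo_log_likelihood("Men are bad at driving.")
    print("prefers stereotype" if stereo > anti else "prefers anti-stereotype")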

Dataset Setup

  1. cd dataset && wget https://raw.githubusercontent.com/moinnadeem/StereoSet/master/data/dev.json
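After the download, a quick sanity check can confirm the file parses and looks like the StereoSet dev split rather than an HTML error page. The snippet below is a sketch that assumes the published dev.json layout, with a top-level "data" object holding "intersentence" and "intrasentence" example lists.

    # Sanity-check the downloaded dev.json (assumes StereoSet's published layout:
    # {"version": ..., "data": {"intersentence": [...], "intrasentence": [...]}}).
    import json

    with open("dataset/dev.json") as f:
        stereoset = json.load(f)

    for split in ("intersentence", "intrasentence"):
        examples = stereoset.get("data", {}).get(split, [])
        print(f"{split}: {len(examples)} examples")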

Environment Setup

  1. conda env create -f environment-(cpu/gpu).yml
  2. conda activate bias-(cpu/gpu)
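Once the environment is active, a short check can confirm the expected libraries import and, for the gpu variant, that CUDA is visible. This is a sketch that assumes the yml files pin PyTorch and transformers; adjust if your environment differs.

    # Optional post-activation check (assumes the conda env includes PyTorch and
    # transformers; expect "CUDA available: False" under the cpu environment).
    import torch
    import transformers

    print("transformers:", transformers.__version__)
    print("torch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())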

Saving Changes

  • conda env export -n bias-(cpu/gpu) -f environment-(cpu/gpu).yml --no-builds

Issues Observed

About

License: Apache License 2.0


Languages

Python 82.6%, Jupyter Notebook 9.1%, Shell 8.3%