This repository contains the work done by the BRUMS team for OffensEval 2020.
The following table shows the best result from each model type.
Type | Model | Accuracy | Weighted F1 | Macro F1 | Weighted Recall | Weighted Precision | (tn, fp, fn, tp) |
---|---|---|---|---|---|---|---|
RNN | BiGRU - FastText | 0.7828 | 0.7854 | 0.7634 | 0.7828 | 0.7901 | (1416, 238, 337, 657) |
CNN | CNN - FastText | 0.7681 | 0.7722 | 0.7510 | 0.7681 | 0.7821 | (1364, 225, 389, 670) |
Transformers | RoBERTa - base (1) | 0.7893 | 0.7883 | 0.7624 | 0.7893 | 0.7876 | (1490, 295, 263, 600) |
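As a sanity check on the table, metrics of this kind can be recomputed with scikit-learn by reconstructing a prediction vector from the (tn, fp, fn, tp) counts of a row. This is a minimal sketch, not the team's evaluation code; the class encoding (0 = not offensive, 1 = offensive) is an assumption, and the counts are taken from the BiGRU - FastText row above.

```python
# Sketch: recompute metrics from the (tn, fp, fn, tp) counts of the
# BiGRU - FastText row. Class encoding 0 = NOT, 1 = OFF is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

tn, fp, fn, tp = 1416, 238, 337, 657  # from the table above

# Gold labels and predictions consistent with those four counts.
y_true = np.array([0] * tn + [0] * fp + [1] * fn + [1] * tp)
y_pred = np.array([0] * tn + [1] * fp + [0] * fn + [1] * tp)

acc = accuracy_score(y_true, y_pred)                 # (tn + tp) / total
macro_f1 = f1_score(y_true, y_pred, average="macro")
weighted_f1 = f1_score(y_true, y_pred, average="weighted")
counts = confusion_matrix(y_true, y_pred).ravel()    # tn, fp, fn, tp
```

Note that weighted recall always equals accuracy (it is the support-weighted average of per-class recalls, i.e. total true predictions over total examples), which is why those two columns match in every row.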