All experiments were evaluated on the test set.
Model type | F1-score (%) | Done |
---|---|---|
microsoft (med-bert) | 91.00 | ✔️ |
biobert + charBert | 92.70 | ✔️ |
bertConfig + BertChar | 93.66 | ✔️ |
bertConfig + BertChar + focalLS | 93.45 | ✔️ |
microsoft + focalLS | 91.05 | ✔️ |
microsoft + charBert | 91.28 | ✔️ |
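
Two of the configurations above use focal loss (the `focalLS` rows). A minimal NumPy sketch of binary focal loss is given below for reference; the `gamma` and `alpha` values are the common defaults from the original focal loss paper, not values confirmed by the experiments in this table, and the function name is illustrative.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy, well-classified examples
    so training focuses on hard ones.

    probs   -- predicted probability of the positive class, shape (N,)
    targets -- ground-truth labels in {0, 1}, shape (N,)
    gamma   -- focusing parameter; gamma=0 recovers weighted cross-entropy
    alpha   -- class-balance weight for the positive class
    """
    # Clip to avoid log(0).
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    # p_t is the probability assigned to the true class.
    p_t = np.where(targets == 1, probs, 1 - probs)
    alpha_t = np.where(targets == 1, alpha, 1 - alpha)
    # Per-example loss: the (1 - p_t)^gamma factor shrinks the loss
    # for confident, correct predictions.
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
```

With `gamma=2`, a confident correct prediction (e.g. `p_t = 0.9`) contributes far less loss than a confident wrong one, which is the intended effect on imbalanced data.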