hailey0huong / LowSource-MultiClass-Classification

Apply pretrained language models with fine-tuning to classify multi-class text in low-resource settings


Low Resource Multi-Class Text Classification

Multi-class sentiment classification is a common real-world problem with many valuable applications. In practice, however, labeled training data for these tasks is often scarce, while traditional approaches require task-specific modifications and training from scratch. As a result, much research over the past year has begun to investigate transfer learning in Natural Language Processing (NLP). Recent work has shown that taking a language model pre-trained on a large general text corpus and fine-tuning it on the target text data achieves state-of-the-art results on many NLP tasks. In this work, I experimented with this idea in low-resource settings, where labeled training data is expensive to obtain, and with class imbalance to mimic a real-world environment. The work compares two models: a Bidirectional LSTM with GloVe embeddings trained from scratch, and the pre-trained language model BERT with a fine-tuned classifier. Results show that the BERT + fine-tuning model generally achieves better accuracy (a lower error rate) even with a very small training set, and the difference grows more pronounced as the sample size increases.
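To illustrate the fine-tuning approach, here is a minimal sketch of fine-tuning BERT for multi-class classification. It assumes the HuggingFace transformers library; the 5-class sentiment setup, toy texts, and hyperparameters are hypothetical placeholders, and the actual notebooks in this repo may differ.

```python
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

# Load the pre-trained language model with a fresh classification head.
# num_labels=5 is a hypothetical 5-class sentiment setup.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5
)

# Toy labeled examples standing in for a small, imbalanced training set.
texts = ["Terrible service, never again.", "Absolutely loved it!"]
labels = torch.tensor([0, 4])

# Tokenize into padded/truncated input IDs plus attention masks.
batch = tokenizer(texts, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")

# Fine-tune: one gradient step shown; in practice loop over epochs.
optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Because only the small classification head is initialized from scratch while the encoder starts from pre-trained weights, even a few labeled examples per class can be enough to adapt the model, which is the intuition behind the low-resource results reported above.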

The full report can be found here.



Languages

Jupyter Notebook: 98.2%
Python: 1.8%