GU-DataLab / stance-detection-KE-MLM

Official resource of the paper "Knowledge Enhanced Masked Language Model for Stance Detection", NAACL 2021

Home Page: https://www.aclweb.org/anthology/2021.naacl-main.376/

Stance Detection

This repository is for the paper - Knowledge Enhanced Masked Language Model for Stance Detection, NAACL 2021. 🚀

Code for the log-odds-ratio with Dirichlet prior is available in the log-odds-ratio repository.
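
For reference, here is a minimal sketch of the z-scored log-odds-ratio with informative Dirichlet prior (Monroe et al., 2008). The function name and the use of a background corpus as the prior are illustrative assumptions, not necessarily the repository's exact implementation.

import numpy as np
from collections import Counter

def log_odds_dirichlet(tokens_a, tokens_b, tokens_background):
    """Z-scored log-odds-ratio with informative Dirichlet prior.
    Returns {word: z_score}; positive scores mark words more
    associated with corpus A than with corpus B."""
    counts_a, counts_b = Counter(tokens_a), Counter(tokens_b)
    prior = Counter(tokens_background)  # background counts serve as the prior
    n_a, n_b = sum(counts_a.values()), sum(counts_b.values())
    a0 = sum(prior.values())

    scores = {}
    for w, aw in prior.items():
        ya, yb = counts_a[w], counts_b[w]
        delta = (np.log((ya + aw) / (n_a + a0 - ya - aw))
                 - np.log((yb + aw) / (n_b + a0 - yb - aw)))
        variance = 1.0 / (ya + aw) + 1.0 / (yb + aw)
        scores[w] = delta / np.sqrt(variance)
    return scores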

Data Sets

These data sets are for research purposes only - Download 🔥

  • Data format is CSV with only 3 columns: "tweet_id","text","label"
  • Labels = {0:"AGAINST", 1:"FAVOR", 2:"NONE"}

The data set contains 2,500 manually stance-labeled tweets, 1,250 for each candidate (Joe Biden and Donald Trump). These tweets were sampled from an unlabeled set of English tweets related to the 2020 US presidential election that our research team collected. Using the Twitter Streaming API, we collected data with election-related hashtags and keywords. Between January 2020 and September 2020, we collected over 5 million tweets, not including quotes and retweets. These unlabeled tweets were used to fine-tune all of our language models. The labeled data that we publicly provide were sampled from this 5M set and were labeled using Amazon Mechanical Turk.

The stance label distributions are shown in the table below. Please refer to our paper for more detail about the data sets.

        %SUPPORT   %OPPOSE   %NEUTRAL
Biden   31.3       39.0      29.8
Trump   27.3       39.9      32.8
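
Given the format above, a minimal loading sketch with pandas (the filename is a placeholder for whichever CSV you download):

import pandas as pd

# placeholder filename -- use the name of the file you downloaded
df = pd.read_csv("stance_biden.csv")  # columns: tweet_id, text, label

id2label = {0: "AGAINST", 1: "FAVOR", 2: "NONE"}
df["label_name"] = df["label"].map(id2label)

# stance distribution, comparable to the table above
print(df["label_name"].value_counts(normalize=True))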

Result

For each pre-trained language model, we trained the downstream stance detection model five times and report the average scores in Table 2.

[Table 2: average stance detection results for each pre-trained language model]
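
As a rough illustration of averaging scores over repeated runs (this is not the paper's evaluation script, and macro-F1 below merely stands in for whichever metric is reported in the paper):

import numpy as np
from sklearn.metrics import f1_score

def average_score(runs):
    """Average a score over repeated fine-tuning runs.
    `runs` is a list of (gold_labels, predicted_labels) pairs,
    one pair per run (five runs in our setup)."""
    scores = [f1_score(gold, pred, average="macro") for gold, pred in runs]
    return np.mean(scores), np.std(scores)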

Pre-trained Models

All models are uploaded to my Hugging Face 🤗 account, so you can load a model with just three lines of code!

Usage

We tested with PyTorch v1.8.1 and Transformers v4.5.1.

Please see the specific model pages on Hugging Face for more usage details. Below is a sample use case.

1. Choose and load a model for stance detection

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np

# select model path here
# see more at https://huggingface.co/kornosk
pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden-KE-MLM"

# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path)

2. Get a prediction (see more in sample_predict.py)

id2label = {
    0: "AGAINST",
    1: "FAVOR",
    2: "NONE"
}

##### Prediction Favor #####
sentence = "Go Go Biden!!!"
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()

print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])

# please consider citing our paper if you feel this is useful :)
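
To score several tweets at once, the same tokenizer and model can be run in batches. This is a sketch that reuses the objects loaded above; it is not part of sample_predict.py.

# batched prediction (sketch)
sentences = [
    "Go Go Biden!!!",
    "I will never vote for him.",
    "Just watched the debate."
]
inputs = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

probabilities = torch.softmax(logits, dim=1)
for sentence, probs in zip(sentences, probabilities.tolist()):
    print(sentence, "->", id2label[np.argmax(probs)])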

Citation

If you feel our paper and resources are useful, please consider citing our work! 🙏

@inproceedings{kawintiranon2021knowledge,
    title={Knowledge Enhanced Masked Language Model for Stance Detection},
    author={Kawintiranon, Kornraphop and Singh, Lisa},
    booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
    year={2021},
    publisher={Association for Computational Linguistics},
    url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}

Troubleshoot

1. Can't load the model

  • From this issue
  • Check the dependencies pytorch==1.8.1 and transformers==4.5.1 (a quick version check is sketched below)
  • Try removing TensorFlow
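
A quick way to confirm that the installed versions match the tested ones (a convenience snippet, not part of the repository):

import torch
import transformers

# mismatched versions are a common cause of model-loading errors
print("torch:", torch.__version__)                # tested with 1.8.1
print("transformers:", transformers.__version__)  # tested with 4.5.1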

About

License: GNU General Public License v3.0

