The dataset and models in this package are obtained using co-training as described in Gao et al., AAAI 2019.
Please cite the AAAI-19 paper: Gao et al., Predicting and Analyzing Language Specificity in Social Media Posts
@InProceedings{gao2019specificity,
author = {Gao, Yifan and Zhong, Yang and Preo\c{t}iuc-Pietro, Daniel and Li, Junyi Jessy},
title = {Predicting and Analyzing Language Specificity in Social Media Posts},
booktitle = {Proceedings of AAAI},
year = {2019},
}
SpecificityTwitter is implemented using Python 3.6+. It depends on the following packages:
- numpy
- pandas
- pickle (part of the Python standard library)
- scikit-learn
- emoji
- GATE Twitter part-of-speech tagger: please download the twitie-tagger and unzip it in the current directory. You can also directly download the tagger we used here
Our model is a support vector regression model implemented with scikit-learn. The last three packages, together with the StanfordCoreNLP toolkit, are required to generate the features used for prediction.
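For orientation, here is a minimal sketch of fitting a support vector regressor with scikit-learn on placeholder features; the real feature extraction and hyperparameters used by this package may differ:

```python
import numpy as np
from sklearn.svm import SVR

# Placeholder features and gold specificity scores in [1, 5]; the real
# pipeline derives features from the lexicons and taggers listed below.
rng = np.random.default_rng(0)
X_train = rng.random((100, 20))
y_train = rng.uniform(1.0, 5.0, 100)

model = SVR(kernel="rbf")  # assumed kernel; the repo's settings may differ
model.fit(X_train, y_train)
print(model.predict(X_train[:5]))
```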
Word lexicons for the models are available for download here. Please note that these resources come with their own licenses. Decompress the folder under the model directory.
There are several files in the resource folder:

- Brown clusters (Turian et al., 2010): browncluster.txt
- Concreteness ratings (Brysbaert et al., 2014): concrete.csv
- GloVe word embeddings trained on Twitter posts (Pennington et al., 2014): glove.twitter.27B.100d.txt
- Sentiment words (Hu and Liu, 2004): negative-words.txt, positive-words.txt
- Stanford NER tagger (Finkel et al., 2005): stanford-ner.jar, english.muc.7class.distsim.crf.ser.gz
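As a rough illustration, the lexicon files can be loaded along these lines; the paths and parsing details below are assumptions and may need adjusting to the decompressed folder layout:

```python
import pandas as pd

# Hypothetical paths; adjust to where you decompressed the resources.
concreteness = pd.read_csv("model/resources/concrete.csv")

def load_word_list(path):
    """Read one word per line, skipping blanks and ';' comment lines."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith(";")}

positive_words = load_word_list("model/resources/positive-words.txt")
negative_words = load_word_list("model/resources/negative-words.txt")
```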
Call:

$ python specificity.py --inputfile inputfile --outputfile predfile

<inputfile> should consist of word-tokenized sentences, one sentence per line; <predfile> is the destination file to which SpecificityTwitter will write the specificity scores, one score per line, in the same order as the sentences in <inputfile>.
The scores are decimal numbers ranging from 1 to 5, with 1.0 being most general and 5.0 being most specific.
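For example, the predictions can be paired back with the input sentences like this (file names mirror the usage line above):

```python
# Read the predictions back in and pair them with the input sentences.
with open("inputfile", encoding="utf-8") as sents, \
     open("predfile", encoding="utf-8") as preds:
    for sentence, score in zip(sents, preds):
        print(f"{float(score):.2f}\t{sentence.strip()}")
```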
- Sentences must be word-tokenized before being fed into the model (see the tokenization sketch below).
- Note that the word embedding file is 1.2 GB and should be downloaded from the link above. Each run of specificity.py loads the file to generate features, so it is best to avoid invoking the script many times; alternatively, modify feature.py to tailor it to your data loading needs (see the loading sketch below).
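As an example of the tokenization requirement, NLTK's TweetTokenizer is one tweet-aware option; the preprocessing used to train the model may differ:

```python
from nltk.tokenize import TweetTokenizer

# Tweet-aware word tokenization, then join tokens with spaces so each
# sentence becomes one line of the input file.
tokenizer = TweetTokenizer()
raw = "Check out our new #AAAI19 paper on specificity!"
print(" ".join(tokenizer.tokenize(raw)))
```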
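And a sketch of loading the GloVe file once and keeping it in memory across batches; the load_glove function and path below are illustrative, not part of this repo's API:

```python
# Illustrative loader: read the GloVe vectors once and reuse them across
# many prediction batches instead of re-reading the 1.2 GB file each run.
def load_glove(path="resources/glove.twitter.27B.100d.txt"):
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = [float(v) for v in parts[1:]]
    return embeddings
```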