This repository contains the code, configurations, and models for our paper "A Unified Span-Based Approach for Opinion Mining with Syntactic Constituents", published at NAACL 2021. The src directory contains the code, and the exp-4.1-baseline directory contains the experiment for "Baseline+BERT" (data0, the first fold of the five-fold cross-validation).
Python 3, PyTorch, Transformers 2.1.1 (for BERT)
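If you need to set up the environment from scratch, the commands below are a minimal sketch (the PyTorch version is not pinned here; any release compatible with Transformers 2.1.1 should work):

pip install torch

pip install transformers==2.1.1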
MPQA 2.0 (url); PTB and OntoNotes can be downloaded from the LDC.
Please check and adjust the file paths and settings in train.sh and config.json before running the code. To train the model, run the following script.
sh train.sh GPU\_ID
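Here GPU\_ID is assumed to be a CUDA device index; for example, to train on the first GPU:

sh train.sh 0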
To test the performance of the trained model, you should run the following script.
sh predict.sh GPU\_ID
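Again assuming GPU\_ID is a CUDA device index, for example:

sh predict.sh 0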
We release a sample model for "exp-4.1-baseline" on Google Drive (url). Important: use the offline evaluation script to evaluate the output file.