This README contains the steps for collecting, filtering, and processing e-cigarette-related posts from Twitter and Reddit. The pipelines are based on Falconet.
- Create and activate a virtual environment, then install the dependencies: `pip install -r requirements.txt`
- Copy `python/falconet/settings/local.py.template` to `python/falconet/settings/local.py` and fill in the variables.
- Copy `pipelines/ecigs/config.py.template` to `pipelines/ecigs/config.py` and fill in the variables.
- Follow the instructions below and run the pipelines.
Detailed instructions for processing the data are in `pipelines/ecigs`.
We provide an overview of our system and pipelines here. We use separate pipelines for Twitter and Reddit.
- The Twitter pipeline involves keyword filtering, a relevance classifier, and geolocation inference; it is covered in `run_twitter_pipeline.py`.
- The Reddit pipeline involves subreddit filtering, keyword filtering, and geolocation inference.
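Keyword filtering is the first stage of both pipelines. A minimal sketch of how such a filter behaves is below; the keyword set here is an assumption for illustration only (the actual keyword lists ship with the repo and are described in the Twitter and Reddit sections).

```python
# Minimal sketch of a keyword filter; the keyword set is illustrative only,
# not the repo's actual keyword list.
ECIG_KEYWORDS = {"vape", "vaping", "e-cig", "ecig", "juul"}

def matches_keywords(text: str) -> bool:
    """Return True if any keyword appears as a token in the post text."""
    tokens = set(text.lower().split())
    return not ECIG_KEYWORDS.isdisjoint(tokens)

print(matches_keywords("thinking about vaping again"))  # True
print(matches_keywords("nice weather today"))           # False
```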
We provide details on running the Twitter pipeline and the Reddit pipeline below.
The processed data is stored in the path specified in the configuration file. The organization of the outputs is described under the Twitter processed data directory and the Reddit processed data directory.
For each processed message, we keep the full JSON record with an additional `annotations` field that contains the information annotated by the pipeline stages.
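A hypothetical example of such a record is sketched below: the original post JSON with an added `annotations` field. The field names inside `annotations` are assumptions for illustration, not the pipeline's actual schema.

```python
import json

# Hypothetical processed message: the post's full JSON record plus an
# "annotations" field holding pipeline output. The keys inside
# "annotations" are illustrative only.
record = {
    "id": "12345",
    "text": "example post about vaping",
    "annotations": {
        "keyword_match": ["vaping"],
        "relevance": 0.92,
        "location": "US",
    },
}

# Records round-trip as one JSON object per message.
line = json.dumps(record)
restored = json.loads(line)
print(restored["annotations"]["relevance"])  # 0.92
```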
To keep track of the annotations, the pipeline also outputs CSV files with raw post counts and predefined aggregations.
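The count files can be aggregated with standard CSV tooling. The sketch below assumes a simple `date,count` layout; the actual column names in the pipeline's CSV output may differ.

```python
import csv
import io

# Hypothetical shape of a counts CSV emitted alongside the processed data;
# the column names ("date", "count") are assumptions for illustration.
csv_text = "date,count\n2020-01-01,10\n2020-01-02,15\n"

rows = list(csv.DictReader(io.StringIO(csv_text)))
total = sum(int(row["count"]) for row in rows)
print(total)  # 25
```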
We provide instructions for extracting information about the data after applying our Twitter and Reddit pipelines.
We aim to keep the pipeline configuration flexible. Follow the instructions here to either modify `pipelines/ecigs/*_pipeline.json` or create your own pipeline in a new subdirectory under `pipelines/`.
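Before editing a pipeline definition, it can help to load and inspect it. The schema of `pipelines/ecigs/*_pipeline.json` is defined by Falconet; the keys used below (`name`, `steps`) are assumptions for illustration only.

```python
import json

# Sketch of inspecting a pipeline definition before editing it. The keys
# ("name", "steps") are illustrative and not Falconet's actual schema.
pipeline_json = (
    '{"name": "ecigs_twitter",'
    ' "steps": ["keyword_filter", "relevance_classifier", "geolocation"]}'
)

pipeline = json.loads(pipeline_json)
print(pipeline["name"])        # ecigs_twitter
print(len(pipeline["steps"]))  # 3
```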
For Twitter, we provide details on the keyword lists, the tobacco relevance classifiers, and how to train the classifiers.
For Reddit, we provide details on the keyword lists, relevant subreddits, and Reddit raw data retrieval.
Mark Dredze (mdredze at cs.jhu.edu)