git clone https://github.com/to-schi/tweet-analyzer-pipeline.git
Get your API credentials from developer.twitter.com.
nano ./tweet-analyzer-pipeline/tweet_collector/src/twitter_cred.py
# Enter your credentials and save the file with Ctrl+X
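For orientation, twitter_cred.py is where the Twitter API keys live. A minimal sketch of what the file might contain follows; the exact variable names in the repository may differ, so treat these as illustrative placeholders and keep whatever names the file already uses.
# twitter_cred.py -- illustrative only; variable names are assumptions
API_KEY = "YOUR_API_KEY"
API_KEY_SECRET = "YOUR_API_KEY_SECRET"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
ACCESS_TOKEN_SECRET = "YOUR_ACCESS_TOKEN_SECRET"
BEARER_TOKEN = "YOUR_BEARER_TOKEN"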
nano ./tweet-analyzer-pipeline/tweet_collector/src/tweet_collector.py
# Insert your query text in line 9 ("query = '#SOME_PHRASE'") and save the file with Ctrl+X
# The collection limit is set to 200 tweets
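The relevant lines in tweet_collector.py look roughly like the sketch below; only the query line and the 200-tweet limit come from the comments above, and the limit variable name is an assumption.
# excerpt from tweet_collector.py (illustrative; names may differ in the repo)
query = '#python'   # your search phrase, e.g. a hashtag
limit = 200         # collection limit mentioned above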
nano ./tweet-analyzer-pipeline/tweet_slack/src/conf.py
# Insert your Slack webhook URL and save the file with Ctrl+X
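conf.py only needs the incoming-webhook URL of the Slack channel that should receive the results. A minimal sketch, assuming the variable is called webhook_url:
# conf.py (illustrative; variable name is an assumption)
webhook_url = "https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"  # replace with your own webhook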
cd tweet-analyzer-pipeline
sudo docker-compose up
# Add '-d' to run the stack in detached (background) mode
Docker Compose will build five containers and start the pipeline automatically: tweet_mongodb, tweet_collector, tweet_postgres, tweet_etl, and tweet_slack.
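Once the stack is running, the usual Docker Compose commands can be used to monitor or stop it, for example:
sudo docker-compose logs -f tweet_slack   # follow the logs of a single service
sudo docker-compose down                  # stop and remove all containers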
This project was created during the Spiced Academy Data Science Bootcamp in November 2021.