
Thomson-reuters-News-ETL-pipeline-and-DB-Setup

Set Up:

  1. Spawn an AWS EMR 6.3.1 cluster whose big data stack includes Spark (see the boto3 sketch after this list).
  2. Spawn an m5.xlarge / m5.2xlarge Ubuntu 18.04 VM on AWS.
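For step 1, a minimal boto3 sketch of spawning the cluster (assuming the default EMR service roles already exist in the account; the region, key name, and instance counts are placeholders):

    # Minimal sketch of step 1 with boto3. Assumes the default EMR roles
    # (EMR_DefaultRole / EMR_EC2_DefaultRole) exist; region, key name and
    # instance counts are placeholders.
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    response = emr.run_job_flow(
        Name="thomson-reuters-etl",
        ReleaseLabel="emr-6.3.1",
        Applications=[{"Name": "Spark"}],   # big data stack must include Spark
        Instances={
            "InstanceGroups": [
                {"Name": "Master", "InstanceRole": "MASTER",
                 "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"Name": "Core", "InstanceRole": "CORE",
                 "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            "Ec2KeyName": "xx",             # matches the xx.pem used to ssh in later
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print("ClusterId:", response["JobFlowId"])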

Prerequisites Setup on AWS EMR:

  1. ssh to the AWS EMR cluster master node using
    'ssh -i xx.pem hadoop@ip-address'
  2. Copy pipeline_prerequisites.sh to this node, along with the private key for the remote server where the CSVs are located.
  3. Run 'chmod +x pipeline_prerequisites.sh'
  4. Run './pipeline_prerequisites.sh'
  5. Increase the driver memory to 15GB in '/usr/lib/spark/conf/spark-defaults.conf' to avoid OOM errors.
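These five steps can also be scripted from a workstation; a rough paramiko sketch under the same assumptions (host, user, and key paths are placeholders taken from the steps above):

    # Rough automation of steps 1-5 with paramiko. Host, user and key paths
    # mirror the manual instructions above and are placeholders.
    import paramiko

    MASTER = "ip-address"                                   # EMR master node
    key = paramiko.RSAKey.from_private_key_file("xx.pem")

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(MASTER, username="hadoop", pkey=key)

    # Step 2: copy the prerequisites script and the remote-server private key.
    sftp = ssh.open_sftp()
    sftp.put("pipeline_prerequisites.sh", "/home/hadoop/pipeline_prerequisites.sh")
    sftp.put("private-key.pem", "/home/hadoop/private-key.pem")
    sftp.close()

    # Steps 3-4: make the script executable and run it.
    for cmd in ("chmod +x pipeline_prerequisites.sh", "./pipeline_prerequisites.sh"):
        _, stdout, stderr = ssh.exec_command(cmd)
        print(stdout.read().decode(), stderr.read().decode())

    # Step 5: raise the driver memory to 15g to avoid OOM errors.
    ssh.exec_command(
        "sudo bash -c \"echo 'spark.driver.memory 15g' "
        ">> /usr/lib/spark/conf/spark-defaults.conf\""
    )
    ssh.close()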

Prerequisites to set up the Elasticsearch DB and configure the table schema on the m5.xlarge / m5.2xlarge VM:

  1. ssh to the VM
  2. Copy 'elastic.sh' to the VM
  3. Run 'chmod +x elastic.sh'
  4. Run './elastic.sh'

Elasticsearch table schema: see the elasticsearchSchema_LI image.
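elastic.sh provisions the database and schema; for reference, a hedged sketch of creating the index over the REST API (assuming Elasticsearch 7.x, and showing only the fields mentioned elsewhere in this README):

    # Hedged sketch of creating the index over the REST API, assuming Elasticsearch 7.x.
    # Only fields mentioned elsewhere in this README are shown; the real schema
    # carries 19 parsed fields plus the shard-routing region field.
    import requests

    ES = "http://x.x.x.x:9200"          # public IP of the ES node

    mapping = {
        "mappings": {
            "properties": {
                "date":     {"type": "date", "format": "yyyy-MM-dd"},
                "time":     {"type": "keyword"},
                "headline": {"type": "text"},
                "text":     {"type": "text"},
                "region":   {"type": "keyword"},   # illustrative name for the routing field
            }
        }
    }

    resp = requests.put(f"{ES}/thomreuters", json=mapping)
    print(resp.status_code, resp.json())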

Airflow Scheduling:

  1. 'airflow_dag.py' automates fetching the tar files from the remote server, untarring them, and running the PySpark application on a @daily schedule.
  2. Make sure the sqlite3 version is > 3.15.0; the default AWS EMR image may ship an older version.
  3. Steps to Run:

Copy airflow_dag.py and file.py / completeCSVetlFile.py to the AWS EMR cluster at '/home/hadoop/airflow_dag.py' and '/home/hadoop/file.py'.

Set these variables in airflow_dag.py:

    ip = 'ip-address-remote-server'
    pvt_key_name = '/location/to/private-key.pem'
    user = 'username'
    fileName = '2013-07-01.csv.gz'

Run the DAG file with 'python airflow_dag.py'.
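For orientation, a hedged outline of an equivalent @daily DAG (this is not the repo's airflow_dag.py; the import path assumes Airflow 1.10.x, and gunzip stands in for the extraction step since the sample file is a .csv.gz):

    # Hedged outline of an equivalent @daily DAG, not the repo's airflow_dag.py.
    # BashOperator import path assumes Airflow 1.10.x (use airflow.operators.bash on 2.x).
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash_operator import BashOperator

    ip = 'ip-address-remote-server'
    pvt_key_name = '/location/to/private-key.pem'
    user = 'username'
    fileName = '2013-07-01.csv.gz'

    with DAG("thomreuters_etl", start_date=datetime(2021, 1, 1),
             schedule_interval="@daily", catchup=False) as dag:

        # Fetch the compressed CSV from the remote server.
        fetch = BashOperator(
            task_id="fetch_csv",
            bash_command=f"scp -i {pvt_key_name} {user}@{ip}:{fileName} /home/hadoop/",
        )

        # Extract it next to the PySpark job.
        extract = BashOperator(
            task_id="extract_csv",
            bash_command=f"gunzip -f /home/hadoop/{fileName}",
        )

        # Run the PySpark application on YARN.
        run_spark = BashOperator(
            task_id="run_pyspark",
            bash_command="spark-submit --master yarn --deploy-mode client /home/hadoop/file.py",
        )

        fetch >> extract >> run_spark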

PySpark application: 'file.py' processes a partial CSV and 'completeCSVetlFile.py' processes the complete CSV. Both expect the jar to be untarred and sitting in the same folder.

To run either:

  1. Copy this file to AWS EMR master node at '/home/hadoop/file.py'.
  2. Set up these variables:
    "es.nodes", "x.x.x.x"  // public IP of the ES node
    "es.port", "9200"
    "es.resource", "thomreuters/2013-07-01"
    CSV file name - '2013-07-01.csv' on line 18
  3. Run 'spark-submit --master yarn --deploy-mode client file.py'
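These settings are options of the elasticsearch-hadoop connector; a hedged sketch of how the final write can look once they are applied (result_df and the es.nodes.wan.only option are illustrative, not taken from file.py):

    # Hedged sketch of the Elasticsearch write using the connector options above.
    # result_df stands in for the DataFrame produced by the job.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("thomreuters-es-write").getOrCreate()

    # Stand-in for the parsed + translated DataFrame.
    result_df = spark.createDataFrame(
        [("2013-07-01", "09:30:00", "sample headline", "sample body", "EMEA")],
        ["date", "time", "headline", "text", "region"],
    )

    (result_df.write
        .format("org.elasticsearch.spark.sql")
        .option("es.nodes", "x.x.x.x")                    # public IP of the ES node
        .option("es.port", "9200")
        .option("es.resource", "thomreuters/2013-07-01")
        .option("es.nodes.wan.only", "true")              # assumption: ES reached over its public IP
        .mode("append")
        .save())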

This pyspark file does:

  1. It reads the CSV file and records the indices where the <"date","time"> pattern matches, using a regex.
  2. It divides the CSV file into text blobs based on those indices and converts them into a partitioned DataFrame.
  3. The partitioned DataFrame goes through a UDF parser, which parses each text blob into a structured hash (19 fields + 1 field for the shard-routing region).
  4. The partitioned DataFrames are brought back to the driver, where the "headline" and "text" fields are converted to English using Spark-NLP.
  5. The resulting DataFrame is saved to Elasticsearch in the 'thomreuters' index.
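A heavily reduced sketch of steps 1-3 (the regex, field names, and parser body are illustrative; the real UDF emits 19 fields plus the routing region, and the Spark-NLP stage of step 4 is omitted):

    # Reduced sketch of steps 1-3: find <"date","time"> anchors with a regex,
    # cut the raw CSV text into blobs at those indices, and parse each blob
    # with a UDF into a structured row. Field names and the parser body are
    # illustrative; the real job emits 19 fields plus the shard-routing region.
    import re
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StructType, StructField, StringType

    spark = SparkSession.builder.appName("thomreuters-parse-sketch").getOrCreate()

    raw = open("2013-07-01.csv").read()

    # Step 1: record the indices where a "date","time" pattern matches.
    anchor = re.compile(r'"\d{4}-\d{2}-\d{2}","\d{2}:\d{2}:\d{2}')
    indices = [m.start() for m in anchor.finditer(raw)]

    # Step 2: split the file into text blobs at those indices.
    blobs = [raw[s:e] for s, e in zip(indices, indices[1:] + [len(raw)])]
    blob_df = spark.createDataFrame([(b,) for b in blobs], ["blob"])

    # Step 3: a UDF parses each blob into a structured record.
    schema = StructType([
        StructField("date", StringType()),
        StructField("time", StringType()),
        StructField("headline", StringType()),
    ])

    @udf(returnType=schema)
    def parse_blob(blob):
        parts = blob.split(",", 3)
        return (parts[0].strip('"'), parts[1].strip('"'),
                parts[2].strip('"') if len(parts) > 2 else None)

    parsed = blob_df.withColumn("record", parse_blob("blob")).select("record.*")
    parsed.show(truncate=False)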

This application takes a while to run, so be patient!

Post-processed output: see the sample output image.

REST API search POST request: see the example screenshot.
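A hedged equivalent of that request with Python requests (the match field and query term are illustrative):

    # Example search POST against the index; the queried field and term are illustrative.
    import requests

    ES = "http://x.x.x.x:9200"          # public IP of the ES node

    query = {"query": {"match": {"headline": "earnings"}}}
    resp = requests.post(f"{ES}/thomreuters/_search", json=query)

    for hit in resp.json()["hits"]["hits"]:
        print(hit["_source"].get("date"), hit["_source"].get("headline"))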
