
Data-pre-processing-pipeline

Fetches data from an API or any database and pushes it into the RabbitMQ input queue. A worker script consumes messages from the input queue, performs whatever aggregations the downstream use case requires, and pushes the aggregated data to the storage queue. A storage script accepts data from the storage queue and sends it to an Elasticsearch database running in a Docker container.
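
The worker stage could look roughly like the following. This is a minimal sketch assuming Python with the pika client; the queue names (`input_queue`, `storage_queue`), the RabbitMQ host, and the `aggregate` function are illustrative placeholders, not taken from this repository.

```python
# Worker sketch: consume from the RabbitMQ input queue, aggregate,
# and forward the result to the storage queue.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="input_queue", durable=True)
channel.queue_declare(queue="storage_queue", durable=True)

def aggregate(record):
    # Placeholder aggregation: shape/summarise the record however the
    # downstream consumers need it.
    return {"id": record.get("id"), "total": sum(record.get("values", []))}

def on_message(ch, method, properties, body):
    record = json.loads(body)
    result = aggregate(record)
    # Push the aggregated document to the storage queue.
    ch.basic_publish(exchange="",
                     routing_key="storage_queue",
                     body=json.dumps(result))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="input_queue", on_message_callback=on_message)
channel.start_consuming()
```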

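The storage stage would then read from the storage queue and index each document into Elasticsearch. Again a sketch under assumptions: Python with pika and the official elasticsearch client (8.x `index(..., document=...)` signature); the index name, queue name, and connection URLs are placeholders, with Elasticsearch reachable on the Docker container's published port.

```python
# Storage sketch: consume aggregated documents from the storage queue
# and index them into Elasticsearch running in Docker.
import json
import pika
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # Dockerised Elasticsearch

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="storage_queue", durable=True)

def on_message(ch, method, properties, body):
    doc = json.loads(body)
    # Index the aggregated document; "pipeline-data" is an assumed index name.
    es.index(index="pipeline-data", document=doc)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="storage_queue", on_message_callback=on_message)
channel.start_consuming()
```

Keeping the worker and storage scripts as separate consumers means each stage can be scaled or restarted independently, with the queues buffering data between them.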