Move historical calculations from Redis to server
johnantonn opened this issue
The historical calculations currently performed by Lua script 2 in Redis need to be moved to the server side. The script has a high performance impact, and writing events to CockroachDB should not have to wait for the historical averages to be computed. A better approach is to store this aggregated information in MongoDB, using an asynchronous thread in the router.
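The "asynchronous thread in the router" idea could look roughly like the sketch below: the hot path only enqueues the event, and a background worker maintains the running averages. This is a minimal illustration, not the project's actual code — the class and key names are hypothetical, and an in-memory dict stands in for the MongoDB collection that would hold the aggregates.

```python
import threading
import queue
from collections import defaultdict

class AsyncAggregator:
    """Hypothetical sketch of off-hot-path aggregate maintenance.

    The router calls track() after the synchronous event write; a daemon
    thread drains the queue and updates running sums/counts. The dict
    below stands in for a MongoDB collection of per-key aggregates.
    """

    def __init__(self):
        self.events = queue.Queue()
        # Stand-in for the Mongo aggregates collection.
        self.store = defaultdict(lambda: {"count": 0, "sum": 0.0})
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def track(self, key, value):
        # Hot path: O(1) enqueue, never blocks on the aggregate update.
        self.events.put((key, value))

    def _run(self):
        while True:
            item = self.events.get()
            if item is None:  # shutdown sentinel
                break
            key, value = item
            agg = self.store[key]
            agg["count"] += 1
            agg["sum"] += value
            self.events.task_done()

    def average(self, key):
        agg = self.store[key]
        return agg["sum"] / agg["count"] if agg["count"] else None

    def close(self):
        self.events.put(None)
        self.worker.join()
```

The key property is that the event write path never waits on the aggregate computation, which is exactly what the Lua-script approach could not provide.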
I attempted an implementation of the historical/aggregate calculations on the server side, i.e. computing them before sending the event further down the pipeline (Kafka) and storing the results in MongoDB. Unfortunately, I realised that this is both a poor fit for the current architecture and infeasible in terms of execution time; the overhead is prohibitive.
The current architecture instead favours the batch pipeline: a process that runs independently of the real-time event tracking and calculates/stores aggregates from the data. Ideally this would be done with Spark, as originally designed.
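The core of such a batch job is a group-by-key average over the stored events. As a plain-Python sketch (the event shape and function name are hypothetical; in the Spark version this would be roughly a `groupBy("key").avg("value")` over the event store, with the result written to MongoDB):

```python
from collections import defaultdict

def compute_historical_averages(events):
    """Batch-job core: group (key, value) events and compute per-key averages.

    `events` is an iterable of (key, value) pairs read from the event
    store. This is an illustrative stand-in for the Spark aggregation;
    the real job would read from the pipeline's storage and write the
    resulting averages to the Mongo aggregates collection.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for key, value in events:
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}
```

Because the job runs on its own schedule, its cost never appears on the real-time event path, which is what made the inline (Redis or server-side) computation prohibitive.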