datamass-io / ml-kraken

Machine-Learning orchestration framework. Cloud-based model management environment.


ml-kraken

ML-Kraken is a fully cloud-based solution designed and built to improve the model management process. Each math model is treated as a separate network service that can be exposed under an IP address and a defined port. This approach is called MaaS (model as a service).

ML-Kraken combines two pieces: a backend based on a serverless solution and a frontend, which is an Angular-based app. As the native execution platform, we decided to use the AWS public cloud.

ELK

Main features

  • R/Spark/Python model deployment
  • MaaS (model as a service) - networked models
  • models exposed as REST services
  • easy integration with real-time analytical systems
  • an environment that easily scales up
  • ability to keep many model versions in one place
  • JupyterHub plugin to communicate with ML-Kraken

Demo

demo

Requirements

  • an AWS account
  • the Serverless Framework installed
  • npm installed

Cloud install

Before running the build-deploy.sh script, set up access to your AWS account. The Serverless Framework looks for the .aws folder on your machine.
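One common way to provide those credentials is a standard AWS shared credentials file in the .aws folder. A minimal sketch (the profile name and placeholder key values below are illustrative, not values from this project):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```

Alternatively, running aws configure with the AWS CLI creates this file interactively.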

git clone git@github.com:datamass-io/ml-kraken.git
cd ./ml-kraken
./build-deploy.sh

UI elements

Models table

This is the main part of ML-Kraken, where all created models are stored. It allows you to:

  • add/start/stop models
  • view the response time of individual calculations in the model
  • view logs for each model

model_table

Functions of selected table fragments:

  1. The button opens the form for adding a new model
  2. Clicking this button copies the model id. Useful for quickly pasting the id into a query
  3. Allows filtering of models in the table
  4. Model status - determines whether the container responsible for a given model is running
  5. Model start/stop button. Starts or stops the container associated with the given model
  6. Button that opens the model log view
  7. Opens the graph of response time to given calculations in the model
  8. Opens the form for editing model parameters
  9. Selects the displayed columns in the main table
  10. Refreshes the model table

Logs table

This table stores entries about what operations were performed on the backend side. It is also possible to view the request and response history.

logs_table

actions_table

The fragments marked in the picture are designed to:

  1. Allow filtering of logs in the table
  2. Toggle the displayed log type between backend logs and request/response logs
  3. Display log details

Model chart

ML-Kraken can display a graph of response time over time. From it, you can assess the current model load and the complexity of the calculations. An example chart is presented below.

model_chart

Running a simple R model

Each ML-Kraken model created has an assigned Docker container in Amazon ECS. As a result, it is possible to run independent models that can be addressed with REST queries. After clicking the model start button, it takes a while for the container to become operational and establish an external IP.

model_run

Performing calculations on a running model requires sending a POST request with the body containing data in JSON format. Example POST request using Postman:

request

As shown above, the JSON should contain:

  • modelData - the parameters sent to the model as input
  • metaData - requires specifying the model id so the request reaches the correct model
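The same request can be sent from the command line with curl. A minimal sketch, assuming a started model with an external IP; the endpoint address, the metaData key name (modelId), and the example input parameters are placeholders, not values taken from the project:

```shell
#!/bin/sh
# Hypothetical model id -- copy the real one from the models table (button 2).
MODEL_ID="00000000-example-model-id"

# Build the JSON body: modelData carries the model's input parameters,
# metaData identifies which model should receive the request.
BODY=$(cat <<EOF
{
  "modelData": { "x": 1.5, "y": 2.0 },
  "metaData": { "modelId": "${MODEL_ID}" }
}
EOF
)
echo "$BODY"

# Send it to the running model once the container has an external IP
# (uncomment and fill in the address and port shown in the models table):
# curl -X POST "http://<model-ip>:<port>" \
#      -H "Content-Type: application/json" \
#      -d "$BODY"
```

curl's -w '%{time_total}' option can additionally report the round-trip time of such a call, which should roughly match the point later plotted on the model chart.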

After the calculations are finished, a new point will be visible on the graph showing the time from sending the request to the response.

graph

Contributing

We happily welcome contributions to ML-Kraken. Please contact us in case of any questions/concerns.



Languages

  • TypeScript 47.4%
  • JavaScript 31.0%
  • HTML 11.4%
  • CSS 10.2%
  • Shell 0.0%