freshflowai / mlflow-tracking

MLflow Tracking Server, containerized

MLflow Tracking Server

This repository packages the MLflow Tracking Server in a Docker container.

Ports

The MLflow Tracking Server listens on port 5000 by default. To publish it on localhost:5000:

docker run -p 5000:5000 crmne/mlflow-tracking

Volumes

It is recommended that you mount /mlruns and /mlartifacts to persistent storage, so runs and artifacts survive container restarts, e.g.:

docker run -p 5000:5000 -v /mnt/mlflow/mlruns:/mlruns -v /mnt/mlflow/mlartifacts:/mlartifacts crmne/mlflow-tracking
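The same port mapping and volume mounts can be expressed as a docker-compose sketch; the service name is illustrative and the host paths are the ones from the example above:

```yaml
services:
  mlflow:
    image: crmne/mlflow-tracking
    ports:
      - "5000:5000"
    volumes:
      - /mnt/mlflow/mlruns:/mlruns
      - /mnt/mlflow/mlartifacts:/mlartifacts
```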

Runs and Artifacts

By default, this container saves runs in /mlruns/mlruns.db and artifacts in /mlartifacts. You can change either by appending mlflow server's --backend-store-uri and --default-artifact-root options to your docker run command (arguments after the image name are passed through to mlflow server). This lets you log runs to files or to any database supported by SQLAlchemy, and store artifacts on many cloud and network storage services. Example:

docker run -p 5000:5000 crmne/mlflow-tracking --backend-store-uri mysql://scott:tiger@localhost/mlflow --default-artifact-root s3://my-mlflow-bucket/

More information at https://mlflow.org/docs/latest/tracking.html#mlflow-tracking-servers

Test your tracking server

test.py contains an example model for testing the MLflow Tracking Server.

  1. Run the MLflow Tracking Server

     docker run -d -p 5000:5000 crmne/mlflow-tracking
    
  2. Install dependencies (you may want to do that in a virtualenv)

     pip install mlflow tensorflow keras
    
  3. Run example model

     python test.py
    
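As a lighter-weight check than running test.py, you can probe the tracking server over HTTP. A stdlib-only sketch, assuming the container from step 1 is published on localhost:5000 (mlflow server exposes a /health endpoint that answers 200 when the server is up):

```python
from urllib.request import urlopen
from urllib.error import URLError


def tracking_server_healthy(base_url="http://localhost:5000", timeout=5):
    """Return True if the tracking server's /health endpoint answers HTTP 200."""
    try:
        with urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Server not running or not reachable at this address
        return False


print("tracking server healthy:", tracking_server_healthy())
```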
