Elyra AIDevSecOps Tutorial

This tutorial discusses the interface between data science and DevOps using project templates, pipelines, and bots. It highlights that data scientists are not so different from developers, and that many MLOps practices and tools can be enhanced by DevSecOps techniques.

The demo application used is the "hello world" for AI: MNIST Classification.
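
To make the task concrete, here is a minimal sketch of such a classifier using TensorFlow/Keras. It is an illustration only; the tutorial's own notebooks may differ in framework and model details.

import tensorflow as tf

# Load and normalize the MNIST dataset (28x28 grayscale digit images).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network is enough for a "hello world" baseline.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
print(model.evaluate(x_test, y_test))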

Environment required

This tutorial has the following environment requirements. If you're running on Project Meteor, which uses the Operate First environment, these requirements are already set up for you (see note below).

  • Open Data Hub v1.0
  • OpenShift (enterprise Kubernetes)
  • Cloud object storage (e.g. Ceph, MinIO)
  • Tekton, used in CI/CD systems to run pipelines created by humans or machines
  • ArgoCD, used for continuous deployment of your applications
  • Tutorial container image (a version-check sketch follows this list):
jupyterhub==1.2.1
jupyterlab>=3.0
elyra>=2.0,<3.0
jupyterlab-requirements>=0.10.9
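
To sanity-check that a running image matches these requirements, you can print the installed versions from a notebook cell. This is a quick diagnostic sketch, not part of the tutorial itself:

# Print the installed versions of the packages the tutorial image expects.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("jupyterhub", "jupyterlab", "elyra", "jupyterlab-requirements"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg} is not installed")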

Operate First Open Environment

Operate First is an open infrastructure environment started at Red Hat's Office of the CTO. It has been selected to run this tutorial since it is an open source initiative that fulfills all the requirements stated above. Anyone with a Google account can log in and start developing. To learn more about Operate First, visit the website or GitHub community.

Tools

This tutorial uses the following technologies:

  • JupyterHub to launch images with Jupyter tooling
  • Elyra supplies a set of extensions to JupyterLab notebooks to support AI projects
  • Project Thoth extension for dependency management on JupyterLab
  • Kubeflow Pipelines for end-to-end experiments using pipelines (see the sketch after this list)
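
For a flavor of what a pipeline looks like from Python, here is a minimal sketch using the kfp 1.x SDK. In the tutorial itself, Elyra's visual pipeline editor builds and submits the pipeline for you; the container image name below is a placeholder.

import kfp
from kfp import dsl

@dsl.pipeline(name="mnist-demo", description="A toy one-step pipeline.")
def mnist_pipeline():
    # Each step runs as a container on the cluster.
    dsl.ContainerOp(
        name="train",
        image="quay.io/example/mnist-train:latest",  # placeholder image
        command=["python", "train.py"],
    )

# Compile to an archive that Kubeflow Pipelines can run.
kfp.compiler.Compiler().compile(mnist_pipeline, "mnist_pipeline.yaml")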

GitOps for reproducibility, portability, traceability with AI support

Nowadays, developers (including data scientists) use Git and GitOps practices to store and share code on development platforms such as GitHub. GitOps best practices allow for reproducibility and traceability in projects.

One of the most important requirements for reproducibility is dependency management. Having dependencies clearly stated allows portability of notebooks, so they can be shared safely with others and reused in other projects.

If you want to know more about this issue in the data science domain, have a look at this article or this video.

Project Thoth keeps dependencies up to date by giving recommendations through a developer's daily tools. Thanks to this tooling, developers (including data scientists) do not have to worry about managing dependencies after they are selected, since conflicts can be handled by Thoth bots and automated pipelines. This AI support can benefit AI projects with performance improvements from optimized dependencies, and with added security, since known-insecure libraries are flagged before they are introduced. If you want to know more, have a look at this repo or Thoth's website.
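
As a concrete example, the sketch below asks Thoth for dependency advice on the project in the current directory by shelling out to the thamos CLI that ships with Project Thoth. It assumes thamos is installed and the project carries a .thoth.yaml configuration; inside JupyterLab, the jupyterlab-requirements extension drives this for you.

# Request a Thoth dependency resolution (advise) for the current project.
import subprocess

result = subprocess.run(["thamos", "advise"], capture_output=True, text=True)
print(result.stdout or result.stderr)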

Automated pipelines and bots for your GitHub project

  • Kebechet Bot to keep your dependencies fresh and up to date, providing AI-backed recommendations and justifications.

  • AICoE Pipeline to support your AI project lifecycle.

All these tools are integrated with the project template, so most of this setup is already done for you. These bots and pipelines automate many of the manual GitOps tasks. For example, to deploy your application you may need to build a container image; GitHub templates integrated with bots provide automated pipelines triggered on demand (e.g. cutting a release (patch, minor, major), delivering a container image, or updating your dependencies), as in the sketch below.
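
As an illustration of that trigger model, the sketch below opens a GitHub issue requesting a patch release, which a Kebechet-style release pipeline can pick up. The repository slug and issue title are placeholders; the exact trigger depends on how the bot is configured for your project.

# Open an issue that a release bot can act on (all names are placeholders).
import os
import requests

resp = requests.post(
    "https://api.github.com/repos/<org>/<repo>/issues",
    headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
    json={"title": "New patch release", "body": "Please cut a patch release."},
)
resp.raise_for_status()
print(resp.json()["html_url"])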

Project templates

The project template used can be found here: project template. It shows the correlation between data scientists' needs (e.g. data, notebooks, models) and those of AI DevOps engineers (e.g. manifests). Having structure in a project ensures that all the pieces required for the ML and DevOps lifecycles are present and easily discoverable.

Tutorial Steps

  1. Prerequisites

ML Lifecycle/Source Lifecycle

  1. Set up your initial environment

  2. Explore notebooks and manage dependencies

  3. Push changes to GitHub

  4. Set up bots and pipelines to create releases, build images, and enable dependency management

  5. Create an AI Pipeline

  6. Run and debug AI Pipeline

DevOps Lifecycle

  1. Deploy the inference application

  2. Test the deployed inference application (see the sketch after this list)

  3. Monitor your deployed inference application
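
Once the inference application is deployed, a smoke test like the following can confirm the endpoint answers. The route URL and payload schema are placeholders; adapt them to the deployed service's API.

# Send a dummy MNIST-shaped input to the deployed endpoint (placeholders).
import numpy as np
import requests

image = np.zeros((28, 28), dtype=float).tolist()
resp = requests.post(
    "http://<your-inference-route>/predict",
    json={"inputs": image},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())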
