aws-samples / amazon-eks-machine-learning-with-terraform-and-kubeflow

Distributed training using Kubeflow on Amazon EKS

Distributed Training and Inference on Amazon EKS

This project defines a prototypical solution for distributed training and inference on Amazon Elastic Kubernetes Service (EKS). It uses Kubeflow Training Operators for distributed PyTorch and TensorFlow training, and KubeRay operators for inference. Inference is also supported on Triton Inference Server.

The primary audience for this project is machine learning researchers, developers, and applied engineers who need to pre-train or fine-tune large language models (LLMs) in the area of generative AI, or train deep neural networks (DNNs) in the area of computer vision.

The solution offers a framework- and accelerator-agnostic approach to distributed training. It works with popular machine learning libraries such as NeMo, Hugging Face Accelerate, PyTorch Lightning, DeepSpeed, Megatron-DeepSpeed, Ray Train, and NeuronX Distributed, among others. The solution also offers a framework- and accelerator-agnostic approach to distributed inference: it works with popular inference engines such as TensorRT-LLM and vLLM, and supports popular serving libraries such as Ray Serve.

Legacy Note:

This project started as a companion to the Mask R-CNN distributed training blog, and that part of the project is documented in this README.

How does the solution work

The solution uses Terraform to deploy the Kubeflow machine learning platform on top of Amazon EKS. Amazon EFS and Amazon FSx for Lustre file-systems are used to store machine learning artifacts: code, configuration files, log files, and checkpoints are typically stored on the EFS file-system, while data is stored on the FSx for Lustre file-system. The FSx for Lustre file-system is configured to automatically import content from, and export content to, the configured S3 bucket.
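Once the platform is up, you can inspect the shared storage from inside the cluster. A quick check, assuming kubectl is configured for the cluster and the persistent volume claims live in the kubeflow namespace (as in the steps later in this guide):

# list persistent volume claims backed by the EFS and FSx for Lustre file-systems
kubectl get pvc -n kubeflow

# list the persistent volumes and their storage classes
kubectl get pv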

The deployed Kubeflow platform version is 1.8.0 and includes Kubeflow Notebooks, Kubeflow TensorBoard, Kubeflow Pipelines, Kubeflow Katib, and Kubeflow Central Dashboard.

The accelerator machines used for running training jobs are automatically managed by Karpenter, which means all machines used for data pre-processing and training are provisioned on demand.
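Because capacity is provisioned on demand, accelerator nodes appear only while jobs need them. One way to observe this, assuming kubectl access to the cluster, is to watch nodes and their instance types while a job is starting:

# watch nodes appear and disappear as Karpenter provisions and retires capacity
kubectl get nodes -L node.kubernetes.io/instance-type --watch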

To launch a data pre-processing or training job, all you need to do is install one of the pre-defined machine-learning charts with a YAML file that defines the inputs to the chart, as sketched below. Here is a very quick example that pre-trains a BERT model on the GLUE MRPC dataset using Hugging Face Accelerate. For a heavyweight example, try the Llama2 fine-tuning example using PyTorch FSDP.
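For orientation, a hedged sketch of the Helm workflow is shown below; the chart path, release name, namespace, and values file name are placeholders, and each example's documentation gives the actual chart and values file to use.

cd ~/amazon-eks-machine-learning-with-terraform-and-kubeflow

# the chart path, release name, namespace, and values file below are placeholders
helm install my-job ./path/to/chart -f my-values.yaml -n my-namespace

# monitor the job's pods, then remove the release when the job is complete
kubectl get pods -n my-namespace
helm uninstall my-job -n my-namespace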

What is in the YAML file

The YAML file is a Helm values file that defines the runtime environment for a data pre-processing or training job. A minimal illustrative sketch follows the list below. The key fields in the Helm values file that are common to all charts are:

  • The image field specifies the required docker container image.
  • The resources field specifies the required infrastructure resources.
  • The git field describes the code repository we plan to use for running the job. The git repository is cloned into an implicitly defined location under the HOME directory, and the location is made available in the environment variable GIT_CLONE_DIR.
  • The pre_script field defines the shell script executed after cloning the git repository, but before launching the job.
  • An optional post-script field defines the shell script executed after the training job completes.
  • The training launch command and arguments are defined in the train field, and the data processing launch command and arguments are defined in the process field.
  • The pvc field specifies the persistent volumes and their mount paths. The EFS and FSx for Lustre persistent volumes are available by default at the /efs and /fsx mount paths, respectively, but these mount paths can be changed.
  • The ebs field specifies optional Amazon EBS volume storage capacity and mount path. By default, no EBS volume is attached.
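For illustration, here is a minimal sketch of a values file using the fields described above. Every concrete value below (the image URI, resource keys, repository URL, commands, and EBS settings) is a hypothetical placeholder; the exact schema varies by chart, so consult the charts and the examples and tutorials in this repository for the authoritative field names.

image: '<account>.dkr.ecr.us-west-2.amazonaws.com/my-image:latest'   # placeholder ECR image URI

resources:                       # sub-keys are chart-specific; shown here for illustration only
  requests:
    "nvidia.com/gpu": 8

git:
  repo_url: https://github.com/example-org/example-training-code.git   # hypothetical repository
  branch: main

pre_script:                      # runs after the git clone, before the job launches
  - pip install -r $GIT_CLONE_DIR/requirements.txt

train:                           # launch command and arguments for the training job
  command:
    - torchrun
  args:
    - $GIT_CLONE_DIR/train.py

ebs:                             # optional; by default no EBS volume is attached
  storage: 100Gi
  mount_path: /tmp/ebs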

Prerequisites

  1. Create and activate an AWS Account
  2. Select your AWS Region. For the tutorial below, we assume the region to be us-west-2
  3. Manage your service limits for required EC2 instances. Ensure your EC2 service limits in your selected AWS Region are set to at least 8 each for p3.16xlarge, p3dn.24xlarge, p4d.24xlarge, g5.xlarge, g5.12xlarge, g5.48xlarge, trn1.2xlarge, and trn1.32xlarge instance types. If you use other EC2 instance types, ensure your EC2 service limits are set accordingly.
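You can review your current EC2 quotas from the CLI before submitting a quota increase. A quick check, assuming the AWS CLI is configured for your account (quota names may vary):

# list EC2 service quotas and show the On-Demand instance quotas
aws service-quotas list-service-quotas --service-code ec2 --region us-west-2 \
    --query "Quotas[?contains(QuotaName, 'On-Demand')].[QuotaName,Value]" --output table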

Getting started

To get started, we need to execute the following steps:

  1. Setup the build machine
  2. Use Terraform to create the required infrastructure
  3. Build and Upload Docker Images to Amazon Elastic Container Registry (ECR)
  4. Create home folder on Amazon EFS and Amazon FSx for Lustre shared file-systems

Setup the build machine

For the build machine, we need a machine capable of building Docker images for the linux/amd64 architecture. The build machine minimally needs the AWS CLI and Docker installed, with the AWS CLI configured for the Administrator job function. It is highly recommended that you launch an EC2 instance for the build machine (see the optional section at the end of this guide).
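A quick sanity check for the build machine, assuming a shell on the instance:

# confirm the machine can build linux/amd64 images and reach AWS with valid credentials
uname -m              # expect x86_64 (or ensure Docker can build for linux/amd64)
docker --version
aws --version
aws sts get-caller-identity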

Clone git repository

Clone this git repository on the build machine using the following commands:

cd ~
git clone https://github.com/aws-samples/amazon-eks-machine-learning-with-terraform-and-kubeflow.git

Install Kubectl

To install kubectl on Linux, execute the following commands:

cd ~/amazon-eks-machine-learning-with-terraform-and-kubeflow
./eks-cluster/utils/install-kubectl-linux.sh

For non-Linux machines, install and configure kubectl for EKS, install aws-iam-authenticator, and make sure the command aws-iam-authenticator help works.
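On any operating system, you can confirm the tooling is in place with:

kubectl version --client
aws-iam-authenticator help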

Install Terraform

Install Terraform. The Terraform configuration files in this repository are consistent with Terraform v1.1.4 syntax, but may work with other Terraform versions as well.
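To confirm the installed Terraform version:

terraform version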

Install Helm

Helm is the package manager for Kubernetes. It uses a package format named charts; a Helm chart is a collection of files that define Kubernetes resources. Install Helm.
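If Helm is not already installed, one option is the official installer script from the Helm project (this assumes network access to GitHub; see the Helm documentation for other installation methods):

# download and run the official Helm 3 installer script, then verify the installation
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version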

Use Terraform to create infrastructure

We use Terraform to create the EKS cluster, and deploy Kubeflow platform.

Enable S3 backend for Terraform

Replace S3_BUCKET with your S3 bucket name, and execute the commands below:

cd ~/amazon-eks-machine-learning-with-terraform-and-kubeflow
./eks-cluster/utils/s3-backend.sh S3_BUCKET

Initialize Terraform

cd ~/amazon-eks-machine-learning-with-terraform-and-kubeflow/eks-cluster/terraform/aws-eks-cluster-and-nodegroup
terraform init

Apply Terraform

Not all the AWS Availability Zones in an AWS Region have all the EC2 instance types. Specify at least three AWS Availability Zones from your AWS Region in azs below, ensuring that you have access to your desired EC2 instance types. Replace S3_BUCKET with your S3 bucket name and execute:

terraform apply -var="profile=default" -var="region=us-west-2" -var="cluster_name=my-eks-cluster" -var='azs=["us-west-2a","us-west-2b","us-west-2c"]' -var="import_path=s3://S3_BUCKET/ml-platform"

If you need to use AWS GPU-accelerated instances with AWS Elastic Fabric Adapter (EFA), you must specify an AWS Availability Zone for running these instances using the cuda_efa_az variable, as shown in the example below:

terraform apply -var="profile=default" -var="region=us-west-2" -var="cluster_name=my-eks-cluster" -var='azs=["us-west-2d","us-west-2b","us-west-2c"]' -var="import_path=s3://S3_BUCKET/ml-platform" -var="cuda_efa_az=us-west-2c"

If you need to use AWS Trainium instances, you must specify an AWS Availability Zone for running Trainium instances using the neuron_az variable, as shown in the example below:

terraform apply -var="profile=default" -var="region=us-west-2" -var="cluster_name=my-eks-cluster" -var='azs=["us-west-2d","us-west-2b","us-west-2c"]' -var="import_path=s3://S3_BUCKET/ml-platform" -var="neuron_az=us-west-2d"

Note: Ensure that the AWS Availability Zone you specify for the neuron_az or cuda_efa_az variable above supports the requested instance types, and that this zone is included in the azs variable.

Retrieve static user password

This step is only needed if you plan to use the Kubeflow Central Dashboard, which is not required for running any of the examples and tutorials in this project. The static user's password is marked sensitive in the Terraform output. To show your static password, execute:

terraform output static_password 

Build and Upload Docker Images to Amazon ECR

Before you try to run any examples or tutorials, you must build and push all the Docker images to Amazon ECR. Replace aws-region below, and execute:

  cd ~/amazon-eks-machine-learning-with-terraform-and-kubeflow
  ./build-ecr-images.sh aws-region

Besides building and pushing images to Amazon ECR, this step automatically updates the image field in the Helm values files used in the examples and tutorials.
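To confirm the images are in your registry, you can list the ECR repositories (repository names are created by the build script and will vary):

# list ECR repositories created by build-ecr-images.sh
aws ecr describe-repositories --region us-west-2 --query "repositories[].repositoryName" --output table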

Create home folder on shared file-systems

Attach to the shared file-systems by executing the following steps:

cd ~/amazon-eks-machine-learning-with-terraform-and-kubeflow
kubectl apply -f eks-cluster/utils/attach-pvc.yaml  -n kubeflow
kubectl exec -it -n kubeflow attach-pvc -- /bin/bash

Inside the attach-pvc pod, for the EFS file-system, execute:

cd /efs
mkdir home
chown 1000:100 home
exit

For the FSx for Lustre file-system, execute:

cd /fsx
mkdir home
chown 1000:100 home
exit

FSx for Lustre File-system Eventual Consistency with S3

The FSx for Lustre file-system is configured to automatically import and export content from and to the configured S3 bucket. By default, /fsx is mapped to the ml-platform top-level folder in the S3 bucket. This automatic importing and exporting of content maintains eventual consistency between the FSx for Lustre file-system and the configured S3 bucket.
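Because the export is eventual, files written to /fsx may take some time to appear in S3. To check the exported content, assuming the default ml-platform import path (replace S3_BUCKET with your bucket name):

# list content exported from the FSx for Lustre file-system to S3
aws s3 ls s3://S3_BUCKET/ml-platform/ --recursive | head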

Access Kubeflow Central Dashboard (Optional)

If your web browser client machine is not the same as your build machine, you must execute the following steps on your client machine before you can access the Kubeflow Central Dashboard in a web browser:

  1. Install the kubectl client

  2. Enable IAM access to your EKS cluster. Before you execute this step, it is highly recommended that you back up your current configuration by executing the following command on your build machine:

    kubectl get configmap aws-auth -n kube-system -o yaml > ~/aws-auth.yaml

After you have enabled IAM access to your EKS cluster, open a terminal on your client machine and start kubectl port-forwarding by using the local and remote ports shown below:

sudo kubectl port-forward svc/istio-ingressgateway -n ingress 443:443

Note: Leave the terminal open.

Next, modify your /etc/hosts file to add the following entry:

127.0.0.1 	istio-ingressgateway.ingress.svc.cluster.local

Open your web browser to the Kubeflow Central Dashboard URL to access the dashboard. For login, use the static username user@example.com, and retrieve the static password from the Terraform output, as described above.

Use Terraform to destroy infrastructure

If you want to preserve any content from your EFS file-system, you must upload it to your Amazon S3 bucket manually. The content stored on the FSx for Lustre file-system is automatically exported to your Amazon S3 bucket under the ml-platform top-level folder.

Please verify your content in the Amazon S3 bucket before destroying the infrastructure. You can recreate your infrastructure using the same S3 bucket.

To destroy all the infrastructure created in this tutorial, execute the following commands:

cd ~/amazon-eks-machine-learning-with-terraform-and-kubeflow/eks-cluster/terraform/aws-eks-cluster-and-nodegroup

terraform destroy -var="profile=default" -var="region=us-west-2" -var="cluster_name=my-eks-cluster" -var='azs=["us-west-2a","us-west-2b","us-west-2c"]'

(Optional) Launch EC2 instance for the build machine

To launch an EC2 instance for the build machine, you will need Administrator job function access to the AWS Management Console. In the console, execute the following steps:

  1. Create an Amazon EC2 key pair in your selected AWS region, if you do not already have one

  2. Create an AWS Service role for an EC2 instance, and attach the AWS managed policy for Administrator access to this IAM role.

  3. Launch an m5.xlarge instance from the Amazon Linux 2 AMI using the IAM role created in the previous step. Use 200 GB for the root volume size.

  4. After the instance state is Running, connect to your Linux instance as ec2-user. On the Linux instance, install the required software tools as described below:

     # install Docker and Git, then enable and start the Docker service
     sudo yum install -y docker git
     sudo systemctl enable docker.service
     sudo systemctl start docker.service
     # allow ec2-user to run Docker without sudo, then log out so the change takes effect
     sudo usermod -aG docker ec2-user
     exit
    

Now, reconnect to your Linux instance so that the docker group membership takes effect.

Contributing

See CONTRIBUTING for more information.

Security

See CONTRIBUTING for more information.

License

See the LICENSE file.
