Intro to Containers on Azure

This lab aims to show how you can quickly deploy container workloads to Azure.

What is it?

This intro lab serves to guide you through two of the many ways you can deploy a container on Azure, namely:

  • Azure Container Instances (ACI)
  • Azure Kubernetes Service (AKS)

Technology used

  • Our container contains a Swagger-enabled API developed in Go which writes a simple order as JSON to your specified Cosmos DB and tracks custom events via Application Insights.

Preparing for this lab

For this lab you will require a Windows machine with the following:

If you are using an older Windows version we recommend you use an Ubuntu VM, connect to it using SSH, and run the Docker commands there. You can still run the Azure CLI locally. If you don't have an SSH tool installed you can get PuTTY.
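
For example, with an SSH client such as OpenSSH (bundled with most Linux distributions, macOS and recent Windows 10 builds) you would connect with something like the following, substituting the username you chose and your VM's public IP (both placeholders, not values from this lab):

ssh <your vm username>@<your vm public ip>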

When using the Azure CLI, you first have to log in. Do this by running the command below and following the instructions: it will ask you to open a browser and enter a code, after which you can enter your credentials to start a session.

az login

After logging in, if you have more than one subscription you may need to set the default subscription you wish to perform actions against. To see the current default subscription, you can run az account show.

az account set --subscription "<your required subscription guid>"
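
If you are not sure which subscription GUID to use, you can first list the subscriptions available to your account; the GUIDs are shown alongside the subscription names:

az account list -o table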

0. Provisioning an Ubuntu VM (optional)

This is an optional step and only recommended if you are not running Windows 10:

  • Go to https://portal.azure.com
  • All Services
  • Find Ubuntu Server 16.04 LTS
    • Provide a username and password (you will need these later when connecting using SSH)
    • Location: West Europe
    • Subscription: type in a unique name
    • VM type: B2s
    • No need for optional settings
  • Wait for the server to be created - you can get started on the next steps in the lab and return here after 5-10 mins... An alert will also pop up in the portal when the deployment is completed.
  • Log in to the Ubuntu server using your SSH tool
    • Install Docker by running the commands below (official guide here):
sudo apt-get update

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo apt-key fingerprint 0EBFCD88

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt-get update

sudo apt-get install docker-ce

To validate that your Docker Engine is running, execute the following command:

sudo docker run hello-world

You should get an output stating something like: "Hello from Docker! This message shows that your installation appears to be working correctly."
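
Optionally, if you would rather not prefix every Docker command with sudo, you can add your user to the docker group and log out and back in (this is the standard Docker post-install step; the rest of this lab keeps the sudo prefix):

sudo usermod -aG docker $USER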

1. Provisioning a Cosmos DB instance

Let's start by creating a Cosmos DB instance in the portal; this is a quick process. Navigate to the Azure portal, create a new Azure Cosmos DB instance and enter the following parameters:

  • ID: <unique db instance name>
  • API: Select MongoDB as the API, as our container API uses the MongoDB driver
  • ResourceGroup: <unique resource group name>
  • Location: West Europe


Once the DB is provisioned, we need to get the Database Username and Password; these may be found in the Settings --> Connection Strings section of your DB. We will need these to run our container, so copy them for convenient access.

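If you prefer the command line, the connection strings (which contain the same username and password) can also be retrieved with the Azure CLI (a sketch; the exact command name varies between CLI versions, and the placeholders are the names you chose above):

az cosmosdb keys list --type connection-strings -n <unique db instance name> -g <unique resource group name>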

2. Provisioning an Application Insights instance

In the Azure portal, create a new Application Insights instance and enter the following parameters:

  • Name: <unique instance name>
  • Application Type: General
  • ResourceGroup: <resource group you created in step 1>
  • Location: West Europe


Once Application Insights is provisioned, we need to get the Instrumentation Key; this may be found in the Configure section under Properties. We will need this to run our container, so copy it for convenient access.

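If you prefer the command line and have the Azure CLI application-insights extension installed, the instrumentation key can also be read like this (a sketch using the instance name you chose above):

az monitor app-insights component show --app <unique instance name> -g <resource group you created in step 1> --query instrumentationKey -o tsv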

3. Provisioning an Azure Container Registry instance

If you would like an example of how to set up an Azure Container Registry instance via an ARM template, have a look here

Navigate to the Azure portal, create a new Azure Container Registry and enter the following parameters:

  • Registry Name: <unique instance name>
  • ResourceGroup: <resource group you created in step 1>
  • Location: West Europe
  • Admin User: Enable
  • SKU: Basic

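As an alternative to the portal, the same registry can be created with a single Azure CLI command (a sketch using the values listed above):

az acr create -n <unique instance name> -g <resource group you created in step 1> -l westeurope --sku Basic --admin-enabled true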

4. Pull the container to your environment and set the environment keys

Open up your docker command window and type the following:

WINDOWS:

docker pull beermug/go_order_sb

LINUX:

sudo docker pull beermug/go_order_sb

We will now test the image locally to ensure that it is working and connecting to our Cosmos DB and Application Insights instances. The DB credentials and App Insights key you copied are passed to the container as environment variables, so we need to make sure these are populated.

The environment keys that need to be set are as follows:

  • DATABASE: <your cosmodb username from step 1>
  • PASSWORD: <your cosmodb password from step 1>
  • INSIGHTSKEY: <your app insights key from step 2>
  • SOURCE: This is a free text field which we will use to specify where we are running the container from, e.g. you can use the values localhost, ACI and AKS for your labs.

So to run the container on your local machine, enter the following command, substituting your environment variable values:

WINDOWS:

docker run --name go_order_sb -p 8080:8080 -e DATABASE="<your cosmodb username from step 1>" -e PASSWORD="<your cosmodb password from step 1>" -e INSIGHTSKEY="<your app insights key from step 2>" -e SOURCE="localhost" --rm -i -t beermug/go_order_sb

LINUX:

sudo docker run --name go_order_sb -p 8080:8080 -e DATABASE="<your cosmodb username from step 1>" -e PASSWORD="<your cosmodb password from step 1>" -e INSIGHTSKEY="<your app insights key from step 2>" -e SOURCE="localhost" --rm -i -t beermug/go_order_sb

Note that the application runs on port 8080, which we bind to the same port on the host. If you are running on Windows, select 'Allow Access' when prompted by Windows Firewall.

If all goes well, you should see the application running on localhost:8080.

On Windows you can navigate to localhost:8080/swagger and test the API (use Chrome or Firefox). Select the 'POST' /order/ section, select the "Try it out" button, enter some values in the JSON provided and select "Execute".

On Linux you can use curl (you might need an additional terminal for this and change the values in the JSON):

curl -X POST "http://localhost:8080/v1/order/" -H  "accept: application/json" -H  "content-type: application/json" -d "{  \"EmailAddress\": \"string\",  \"ID\": \"string\",  \"PreferredLanguage\": \"string\",  \"Product\": \"string\",  \"Source\": \"string\",  \"Total\": 0}"

If the request succeeded, you will get a CosmosDB Id returned for the order you have just placed.

We can now query CosmosDB to check our entry there. In the Azure portal, navigate back to your Cosmos DB instance and go to the Data Explorer section, where we can query for the order we placed. A collection called 'orders' will have been created within your database; you can then apply a filter for the id that was returned, namely:

{"id":"5995b963134e4f007bc45447"}


5. Retag the image and upload it to your private Azure Container Registry

Navigate to the Azure Container Registry instance you provisioned within the Azure portal and click on the Quick Start blade; this will provide you with the relevant commands to upload a container image to your registry.


Now we will push the image up to the Azure Container Registry. Enter the following (from the Quick Start screen):

WINDOWS:

docker login <yourcontainerregistryinstance>.azurecr.io

LINUX:

sudo docker login <yourcontainerregistryinstance>.azurecr.io

To get the username and password, navigate to the Access Keys blade.

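If you would rather not copy the credentials from the portal, the Azure CLI can show them, or log Docker in to the registry directly (assuming the CLI runs in the same environment as Docker):

az acr credential show -n <yourcontainerregistryinstance>
az acr login -n <yourcontainerregistryinstance>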

You will receive a 'Login Succeeded' message. Now type the following:

WINDOWS:

docker tag beermug/go_order_sb <yourcontainerregistryinstance>.azurecr.io/go_order_sb
docker push <yourcontainerregistryinstance>.azurecr.io/go_order_sb

LINUX:

sudo docker tag beermug/go_order_sb <yourcontainerregistryinstance>.azurecr.io/go_order_sb
sudo docker push <yourcontainerregistryinstance>.azurecr.io/go_order_sb

Once this has completed, you will be able to see your container image in the Container Registry within the portal.


6. Deploy the container to Azure Container Instance

Now we will deploy our container to Azure Container Instances.

We will deploy our container instance via the Azure CLI directly. You can use your Windows machine for this, but of course the CLI is cross platform so you can also install on Mac or Linux.

az container create -n go-order-sb -g <myResourceGroup> -e DATABASE=<your cosmodb username from step 1> PASSWORD=<your cosmodb password from step 1> INSIGHTSKEY=<your app insights key from step 2> SOURCE="ACI" --image <yourcontainerregistryinstance>.azurecr.io/go_order_sb:latest --registry-password <your acr admin password> --memory 1 --cpu 1 --dns-name-label <unique dns prefix> --ports 8080

You can check the status of the deployment by issuing the container list command:

az container show -n go-order-sb -g <myResourceGroup> -o table

Once the container has moved to the "Succeeded" state you will see your external IP address under the "IP:ports" column. Copy this value and navigate to http://yourACIExternalIP:8080/swagger to test your API like before.
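
If the container does not reach the "Succeeded" state, or the API does not respond, the container logs are usually the quickest way to find out why (for example a mistyped environment variable):

az container logs -n go-order-sb -g <myResourceGroup>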

7. Deploy the container to an Azure Managed Kubernetes Cluster (AKS)

Here we will deploy a Kubernetes cluster using the Azure CLI.

Enabling AKS preview for your Azure subscription

While AKS is in preview, creating new clusters may require a feature flag on your subscription. You may request this feature for any number of subscriptions that you would like to use. To check if the feature is enabled, run the following command.

az provider show -n Microsoft.ContainerService -o table

Use the az provider register command to register the AKS provider, if it is not already registered:

az provider register -n Microsoft.ContainerService --debug

After registering, you are now ready to create a Kubernetes cluster with AKS.

Create Kubernetes cluster

Use the az aks create command to create an AKS cluster. The following example creates a cluster named myAKSCluster with three nodes. This takes about 10 minutes, so grab a cup of coffee, read the documentation: https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes or watch a video: https://channel9.msdn.com/Shows/Azure-Friday/Container-Orchestration-Simplified-with-Managed-Kubernetes-in-Azure-Container-Service-AKS...

az aks create --resource-group <myResourceGroup> --location westeurope --name myAKSCluster --node-count 3 --generate-ssh-keys --debug

To manage a Kubernetes cluster, use kubectl, the Kubernetes command-line client.

If you're using Azure Cloud Shell, kubectl is already installed. You should also have installed kubectl locally as part of the pre-reqs; if not, you can use this command:

az aks install-cli

To configure kubectl to connect to your Kubernetes cluster, use the az aks get-credentials command. This step downloads credentials and configures the Kubernetes CLI to use them.

az aks get-credentials --resource-group <myResourceGroup> --name myAKSCluster

To verify the connection to your cluster, use the kubectl get command to return a list of the cluster nodes. Note that it can take a few minutes for the nodes to appear.

kubectl get nodes

Register our Azure Container Registry within Kubernetes

We now want to register our private Azure Container Registry with our Kubernetes cluster to ensure that we can pull images from it. Enter the following within your command window:

kubectl create secret docker-registry <yourcontainerregistryinstance> --docker-server=<yourcontainerregistryinstance>.azurecr.io --docker-username=<your acr admin username> --docker-password=<your acr admin password> --docker-email=<youremailaddress.com>
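
You can also confirm that the secret was created from the command line before opening the dashboard:

kubectl get secret <yourcontainerregistryinstance>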

To view the Kubernetes dashboard, start the proxy from the command line and access the dashboard in your browser at http://localhost:8001/ui/:

kubectl proxy

In the Kubernetes dashboard you should now see the secret listed in the Secrets section.


Associate the environment variables with the container we want to deploy to Kubernetes

We will now deploy our container via the yaml file go_order_sb.yaml, which is here. Before we do, we need to edit this file to set our environment variables and to make sure your private Azure Container Registry is referenced correctly:

spec:
      containers:
      - name: goordersb
        image: <containerregistry>.azurecr.io/go_order_sb
        env:
        - name: DATABASE
          value: "<your cosmodb username from step 1>"
        - name: PASSWORD
          value: "<your cosmodb password from step 1>"
        - name: INSIGHTSKEY
          value: "<your app insights key from step 2>"
        - name: SOURCE
          value: "K8"
        ports:
        - containerPort: 8080
      imagePullSecrets:
        - name: <yourcontainerregistry>

Once the yaml file has been updated, we can now deploy our container. Within the command line enter the following:

kubectl create -f ./<your path>/go_order_sb.yaml

You should get a success message that a deployment and a service have been created. Navigate back to the Kubernetes dashboard and you should see the following:

  • Your deployments running
  • Your three pods
  • Your service and external endpoint

Note that it might take a couple of minutes to get the external IP in place. Once ready, you can navigate to http://yourk8serviceendpoint:8080/swagger and test your API.
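
The dashboard is not required to find the external endpoint; kubectl can watch the service until the external IP is assigned (the service name comes from go_order_sb.yaml and is shown here as a placeholder):

kubectl get service <your service name from go_order_sb.yaml> --watch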

View container telemetry in Application Insights

The container we have deployed writes simple events with a time stamp to Application Insights, but we could write much richer metrics. Application Insights provides a number of prebuilt dashboards to view application statistics, alongside a query tool for getting deep custom insights. For the purposes of this intro we will simply expose the custom events we have tracked, namely the commits to Azure Cosmos DB.

In the portal, navigate to the Application Insights instance you provisioned and click 'Metrics Explorer'.


Click edit on one of the charts, select a Time range and set the Filters to the event names. This will retrieve all of the writes to Cosmos DB.


Finally, for more powerful queries, select the 'Analytics' button.


8. Clean up the resources you have created

If you don't want to keep using the services you have created during this lab, you can go to the portal and delete the entire resource group. As you have seen, you can very quickly create them again. Azure is awesome!!! :-)
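
If you prefer the CLI, deleting a resource group removes everything in it with one command (run it for each resource group you used in this lab; this is irreversible, so double-check the name first):

az group delete -n <unique resource group name> --yes --no-wait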


License: MIT