This workshop walks you through:
- Installing prerequisites
- Creating a local cluster on Docker using Tanzu Community Edition
- Installing Application Toolkit on the cluster
- Running an example software supply chain using Cartographer to move a developer workload from source to deployment.
The chain uses:
- FluxCD - to poll for new source code commits
- kpack - to build and publish container images
- Harbor - to store and scan container images
- Knative Serving - to run the application
The instructor will provide you with details to log into a VM where you will complete the workshop.
The VM already has some prerequisites installed:
- Binaries:
  - `docker`
  - the Carvel suite, specifically:
    - `vendir`
    - `ytt`
    - `kapp`
- A clone of this repo in `$HOME/workshop`
- Environment variables with credentials for the Harbor registry
- Change into the directory where this repository is cloned.

  ```shell
  cd $HOME/workshop
  ```
- Run the following script to install additional dependencies, namely:
  - `kubectl` (with the `tree` plugin)
  - `yq` (for formatting YAML)
  - `kp` (the kpack CLI)
  - the `tanzu` CLI and `tanzu apps` plugin installers

  Note: This script uses Carvel's `vendir` tool to download the necessary files. You can see the list of files to download in the `vendir.yml` configuration file.

  ```shell
  ./download-dependencies.sh
  ```
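For context, a `vendir.yml` that downloads release assets like these typically looks something like the sketch below. The slug, tag, and asset names are illustrative, not necessarily the workshop's actual configuration; see the repo's `vendir.yml` for the real list.

```yaml
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
- path: vendir
  contents:
  # Illustrative entry: fetch a GitHub release asset and unpack it
  - path: tce-linux-amd64-v0.11.0
    githubRelease:
      slug: vmware-tanzu/community-edition
      tag: v0.11.0
      assetNames: ["tce-linux-amd64-v0.11.0.tar.gz"]
      disableAutoChecksumValidation: true
      unpackArchive:
        path: tce-linux-amd64-v0.11.0.tar.gz
```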
- One of the dependencies that was downloaded is the Tanzu Community Edition CLI release file. Run the following command to complete the installation of the CLI.

  ```shell
  ./vendir/tce-linux-amd64-v0.11.0/install.sh
  ```
- The Tanzu Community Edition CLI is called `tanzu`. Make sure it's properly installed by checking the version.

  ```shell
  tanzu version
  ```
- Install the apps plugin for the `tanzu` CLI.

  ```shell
  tanzu plugin install apps --local ./vendir --version v0.6.0
  ```
- Create an "unmanaged" Tanzu Community Edition Kubernetes cluster.

  ```shell
  tanzu uc create spring-one-tour -p 80:80 -p 443:443
  ```
- You can look through the output to get a better sense for the components included in the cluster, namely:
- Package repositories, for simple installation of a curated set of Kubernetes OSS tooling
- kapp-controller, for package lifecycle management
- Calico Container Network Interface (CNI) for container and pod networking
- When the cluster has been created, you can list the package repositories in all namespaces.

  ```shell
  tanzu package repository list -A
  ```
- You can also check for packages that have been installed. If the status of the `cni` package is `Reconciling`, wait a few moments and run this command again until the status is `Reconcile succeeded`.

  ```shell
  tanzu package installed list -A
  ```
- You can also list other available packages in the `tanzu-package-repo-global` namespace (the default, so there is no need to specify it).

  ```shell
  tanzu package available list
  ```
- Application Toolkit is a meta-package that contains 6 packages:

  | Name | Package |
  |---|---|
  | Cartographer | cartographer.community.tanzu.vmware.com |
  | cert-manager | cert-manager.community.tanzu.vmware.com |
  | Contour | contour.community.tanzu.vmware.com |
  | Flux CD Source Controller | fluxcd-source-controller.community.tanzu.vmware.com |
  | Knative Serving | knative-serving.community.tanzu.vmware.com |
  | kpack | kpack.community.tanzu.vmware.com |

  Three of these packages require configuration. You can see the configuration in values-install-template.yaml.
- Notice that the configuration file contains several environment variables. These have been pre-set on your VM. Check them using the following command.

  ```shell
  env | grep KP_
  ```

  The output should look something like this:

  ```shell
  $ env | grep KP_
  KP_USERNAME=user001
  KP_PASSWORD=some-password
  KP_REPO=harbor.tanzu.coraiberkleid.site/user001/kp
  ```
- Run the following command to create a final values file with the proper values in place of the variables:

  ```shell
  envsubst < values-install-template.yaml > values-install.yaml
  ```
- Make sure the new values-install.yaml contains the proper replacement values.

  ```shell
  cat values-install.yaml
  ```
- Install Application Toolkit.

  ```shell
  tanzu package install app-toolkit --package-name app-toolkit.community.tanzu.vmware.com --version 0.1.0 -f values-install.yaml -n tanzu-package-repo-global
  ```
- When the installation is complete, verify that all packages were installed and that their status is "Reconcile succeeded."

  ```shell
  tanzu package installed list -n tanzu-package-repo-global
  ```
In this section, you will create a basic workflow to move an application from source code to deployment, as follows:
```
get source (FluxCD) --> build image (kpack) --> run (Knative Serving)
```
You will automate the workflow using Cartographer to create a software supply chain.
kpack needs a builder in order to turn application source code into OCI images. A builder is an image, compliant with Cloud Native Buildpacks, that provides the base OS images necessary to build and run the application (the "stack"), as well as buildpacks to handle application compilation, dependencies, and other language-specific details (the "store").
You can create the stack, store, and builder using `kubectl` and YAML configuration, but in this example, we will use `kp`, the kpack CLI.
- Log in to Harbor using the `docker` CLI so that `kp` has access to Harbor credentials.

  ```shell
  echo $KP_PASSWORD | docker login -u ${KP_USERNAME} --password-stdin https://harbor.tanzu.coraiberkleid.site
  ```
- Log into the Harbor UI using the same credentials. Verify that there are no images in your user project.

- Create the ClusterStack.

  ```shell
  kp clusterstack save base --build-image paketobuildpacks/build:base-cnb --run-image paketobuildpacks/run:base-cnb
  ```
- Create the ClusterStore.

  ```shell
  kp clusterstore save default -b gcr.io/paketo-buildpacks/java -b gcr.io/paketo-buildpacks/go
  ```
- Create a ClusterBuilder. Notice that it uses a configuration file, kpack-builder-order.yaml, to set the order in which buildpacks will evaluate the application code.

  ```shell
  kp clusterbuilder save builder --tag ${IMAGE_PREFIX}builder --stack base --store default --order example/kpack-builder-order.yaml
  ```
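For reference, the `kp clusterstack save` command above creates a kpack resource roughly like the YAML below. This is a sketch based on kpack's v1alpha2 API; the stack ID shown is an assumed value, since `kp` actually reads it from the image labels.

```yaml
apiVersion: kpack.io/v1alpha2
kind: ClusterStack
metadata:
  name: base
spec:
  id: io.buildpacks.stacks.bionic   # assumed; kp derives this from image metadata
  buildImage:
    image: paketobuildpacks/build:base-cnb
  runImage:
    image: paketobuildpacks/run:base-cnb
```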
- Check the Harbor UI. Log in using the same credentials (`env | grep KP_`).

  You will see 4 images under the path `your-username/kp`. These correspond to the build image and the run image in the stack, as well as the go and java buildpacks in the store.

  You will also see the builder image under the path `your-username/builder`. This builder includes the stack and store, and it is the image that kpack will use to build images from application source code.
The Cartographer supply chain will require read/write access to Harbor and to various cluster resources needed to process the workflow. Hence, you need to create proper role-based access control (RBAC) resources first. Take a look at the RBAC configuration provided in the example: `./example/cluster`. In this example, the default service account will be granted permission to create the necessary cluster resources, and a separate service account will be used to protect Harbor credentials.
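The pattern typically looks like the sketch below: a docker-registry Secret holding the Harbor credentials, attached to a dedicated ServiceAccount that the supply chain resources reference. The names here are illustrative, not necessarily those used in `./example/cluster`.

```yaml
# Illustrative names; see ./example/cluster for the actual configuration
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded Docker config with Harbor credentials>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: supply-chain-sa
secrets:
- name: registry-credentials
imagePullSecrets:
- name: registry-credentials
```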
This configuration will retrieve credentials from a different set of environment variables. Check them using the following commands.

```shell
# For the example supply chain:
env | grep REGISTRY_
env | grep IMAGE_PREFIX
```
The output should look something like this:

```shell
$ env | grep REGISTRY_
REGISTRY_URL=https://harbor.tanzu.coraiberkleid.site
REGISTRY_USERNAME=user001
REGISTRY_PASSWORD=some-password
$ env | grep IMAGE_PREFIX
IMAGE_PREFIX=harbor.tanzu.coraiberkleid.site/user001/
```

Note: For the workshop, we are using the same registry credentials to create the builder with `kp` and for the supply chain service account to push/pull application images. However, you could choose to use different credentials to enforce more granular access controls.
Run the following command to create a final values file with the proper values in place of the variables:

```shell
envsubst < values-example-template.yaml > values-example.yaml
```
Validate that the values have been properly substituted.

```shell
cat values-example.yaml
```
Apply the Cartographer RBAC configuration to the cluster.

```shell
kapp deploy --yes -a example-rbac -f <(ytt --ignore-unknown-comments -f example/cluster/ -f values-example.yaml)
```
Cartographer will automate the flow of applications from source code to deployment using Cartographer-specific resources. In this example, you will use:
- ClusterSupplyChain - to define the sequence of the flow from FluxCD to kpack to Knative Serving, and to map output of one resource as input to the next
- Templates (ClusterSourceTemplate, ClusterImageTemplate, and ClusterTemplate) - to give Cartographer the ability to instantiate and monitor FluxCD, kpack, and Knative Serving resources for each application submitted to the supply chain
Review the templates and the supply chain defined in ./example/app-operator. Notice that:
- Each template contains a parameterized configuration for one of the resources in the example workflow (FluxCD GitRepository, kpack Image, and Knative Serving Service).
- The parameterized values will be injected from a "workload"—this refers to the resource the developer will submit with application-specific details
- Templates differ based on the outputs they produce:
- ClusterSourceTemplate produces a url and revision
- ClusterImageTemplate produces an image (tag)
- ClusterTemplate does not produce any output
- The template configuration does not set the output value; rather, it sets the path to the output value in the corresponding resource's status (e.g. urlPath, not url). Cartographer will take care of retrieving this value and assigning it to the output field.
- The ClusterSupplyChain defines the order of the resources and maps the output of one as input to the next.
- The templates (specifically ClusterSourceTemplate for the kpack Image and ClusterTemplate for Knative Serving Service) assign specific output values to keys in the resource configuration.
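To make the output-path idea concrete, a ClusterSourceTemplate wrapping a FluxCD GitRepository might look roughly like this. This is a sketch using Cartographer's v1alpha1 API with illustrative names and values; see `./example/app-operator` for the actual templates.

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterSourceTemplate
metadata:
  name: example-source
spec:
  # Paths into the stamped GitRepository's status, not literal values;
  # Cartographer reads these fields and exposes them as outputs.
  urlPath: .status.artifact.url
  revisionPath: .status.artifact.revision
  template:
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: $(workload.metadata.name)$
    spec:
      interval: 1m
      url: $(workload.spec.source.git.url)$
      ref: $(workload.spec.source.git.ref)$
```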
Run the following command to apply the template and supply chain configurations to the cluster.

```shell
kapp deploy --yes -a example-sc -f <(ytt --ignore-unknown-comments -f example/app-operator/ -f values-example.yaml)
```
With the supply chain fully configured in the cluster, developers can begin to deploy applications using the Cartographer Workload resource. Workloads provide a clean separation of concerns between developers and application operators, isolating the information unique to a developer workload.
You can create Workloads imperatively using the `tanzu` CLI, or declaratively using `kubectl` and YAML configuration. In this example, you will use the imperative approach.
Run the following command to create a Workload. Notice that the "type" (web) matches the selector value in the ClusterSupplyChain.

```shell
tanzu apps workload create hello-chicago --type web --git-repo https://github.com/ciberkleid/hello-go.git --git-branch main --app hello-chicago --env "HELLO_MSG=chicago" --yes
```
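For comparison, the declarative equivalent of this imperative command would be a Workload manifest roughly like the sketch below. The exact workload-type label name is an assumption here; it must match the selector defined in this example's ClusterSupplyChain.

```yaml
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: hello-chicago
  labels:
    app.kubernetes.io/part-of: hello-chicago
    apps.tanzu.vmware.com/workload-type: web   # assumed; must match the supply chain selector
spec:
  source:
    git:
      url: https://github.com/ciberkleid/hello-go.git
      ref:
        branch: main
  env:
  - name: HELLO_MSG
    value: chicago
```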
Note: The supply chain will likely take a few minutes to deploy the application the first time, as kpack needs to download dependencies to build and publish the image. Subsequent runs will leverage cached dependencies and other optimizations to build the image more quickly.
Track the progress of the supply chain workflow.

```shell
tanzu apps workload get hello-chicago   # Alt: kubectl get workload hello-chicago -o yaml | yq
```
If the build is still running, you can optionally use the kpack CLI, `kp`, to check the progress of the build.

```shell
kp build logs hello-chicago   # Also: tanzu apps workload tail hello-chicago
```
You can use the `kubectl tree` plugin to see the dependent resources spawned for the Workload. You should see an App, Image, and GitRepository. The latter two will each have dependent resources as well.

```shell
kubectl tree workload hello-chicago
```
If the Workload status is "Ready," you can check on the Knative Serving Service resource.

```shell
kubectl get kservice hello-chicago
```
Make sure the application is working:

```shell
curl http://hello-chicago.default.127-0-0-1.sslip.io
```
To learn more about the resources Knative Serving creates automatically, run `kubectl get all` or use the `kubectl tree` plugin as follows. Knative Serving provides additional functionality (e.g. auto-scaling, ingress configuration and routing, revision management) over and above a simple Deployment and Service, without requiring complex configuration.

```shell
kubectl tree kservice hello-chicago
```
Congratulations! You have installed a Kubernetes cluster with elevated developer-centric platform capabilities and deployed a path to production for a variety of applications!
To learn more, visit the following resources:
- Tanzu Community Edition
- Application Toolkit
- Cartographer
- Cartographer examples
- FluxCD Source Controller
- Cloud Native Buildpacks
- kpack
- Knative Serving
To delete the cluster, run:

```shell
tanzu unmanaged-cluster delete spring-one-tour
```