This repository contains the necessary files to run privileged 🚀 but secured 🔒 Cloud Development Environments on OpenShift using Kata Containers.
- A cluster running OpenShift v4.15 or later on bare-metal worker nodes, with at least one non-admin user (cf. the sample OpenShift install-config.yaml and a script to add a non-admin user)
- OpenShift Sandboxed Containers Operator (cf. install-ocp-sandbox-operator.sh)
- OpenShift Dev Spaces Operator (cf. install-ocp-dev-spaces-operator.sh)
- Kyverno (cf. install-kyverno.sh)
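Kyverno is installed here so that a cluster policy can later adjust the developer namespace's Pods. As a rough, hypothetical sketch only (the policy name and mutation below are illustrative assumptions, not the actual policy shipped under configuration/resources), a Kyverno ClusterPolicy can mutate Pods in a given namespace to use the Kata runtime class:

```yaml
# Illustrative sketch, NOT the repository's real policy:
# mutate every Pod created in the developer namespace
# so that it runs with the Kata runtime class.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: set-kata-runtime   # hypothetical name
spec:
  rules:
    - name: add-kata-runtime-class
      match:
        any:
          - resources:
              kinds: ["Pod"]
              namespaces: ["<user>-che"]
      mutate:
        patchStrategicMerge:
          spec:
            runtimeClassName: kata
```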
- Create the developer namespace `<user>-devspaces` if it doesn't exist yet.
- Apply a Kyverno policy that allows running privileged Pods, using the Kata runtime, in the `<user>-che` namespace.
- Create the privileged ServiceAccount, `privileged-sa`, and RoleBinding, `privileged-rb`, in the `<user>-che` namespace.
- Configure OpenShift Dev Spaces with `configure-ocp-dev-spaces.sh` so that Dev Spaces uses the ServiceAccount `privileged-sa` for CDE Pods.
- Start a workspace using a DevWorkspace with the following `spec.template.attributes`: `controller.devfile.io/runtime-class: kata` and `pod-overrides: {"metadata": {"annotations": {"io.kubernetes.cri-o.Devices": "/dev/fuse"}}}` (cf. devworkspace.yaml).
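Rendered as YAML, those attributes sit in the DevWorkspace roughly as follows (a sketch based on the attributes above; all other fields are elided, cf. devworkspace.yaml for the full manifest):

```yaml
apiVersion: workspace.devfile.io/v1alpha2
kind: DevWorkspace
metadata:
  name: privileged-cde
spec:
  template:
    attributes:
      # Run the CDE Pod with the Kata runtime class
      controller.devfile.io/runtime-class: kata
      # Expose /dev/fuse to the containers via a CRI-O annotation
      pod-overrides:
        metadata:
          annotations:
            io.kubernetes.cri-o.Devices: /dev/fuse
```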
These steps can be executed using the following commands after cloning this repository:
```bash
# Set the namespace name
export NS="<user>-devspaces"

# Create the namespace, the privileged service account, and the Kyverno policy
envsubst < configuration/resources/kustomization.yaml | sponge configuration/resources/kustomization.yaml
kubectl apply -k ./configuration/resources

# Configure OpenShift Dev Spaces
./configuration/configure-ocp-dev-spaces.sh

# Start a workspace that uses VS Code
kubectl apply -f ./tests/vscode.yaml -n $NS
kubectl apply -f ./tests/devworkspace.yaml -n $NS

# Get the IDE URL and open it in a browser
kubectl get dw/privileged-cde -n $NS -o json | jq .status.mainUrl
```
The CDE Pod can then be inspected to verify its runtime class, ServiceAccount, and security context:

```bash
POD="<cde-podname>"
NS="<user>-che"
kubectl get po -n $NS $POD -o json | jq '.spec.runtimeClassName'  # should be `kata`
kubectl get po -n $NS $POD -o json | jq '.spec.serviceAccount'    # should be `privileged-sa`
kubectl get po -n $NS $POD -o json | jq '.spec.containers[].securityContext'  # privileged etc.
```
Trying to run a privileged Pod with the default runtime fails (cf. run-privileged-pod-with-runc.sh), but running it with Kata, i.e. inside a VM, works (cf. run-privileged-pod-with-kata.sh).
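The two test scripts essentially apply a Pod like the following, with and without the runtime class (a minimal sketch under that assumption; the Pod name and image are illustrative, not the scripts' exact manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-test            # illustrative name
spec:
  runtimeClassName: kata           # omit this line to reproduce the failing runc case
  containers:
    - name: test
      image: registry.access.redhat.com/ubi9/ubi   # illustrative image
      command: ["sleep", "infinity"]
      securityContext:
        privileged: true
```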
- Remove hard-coded namespace and user
- Use a minimal set of capabilities to make `dnf` and `podman run` work
- Use a simple DevWorkspace to start a workspace
- Avoid adding the annotation and the runtime class in the DevWorkspace/Devfile (issues 1 and 2)
- Change the sample to use a modified version of UDI that works with root and `podman run`