As a user, you can define a custom Docker image, and this image is deployed and managed by simple-kubernetes-operator. Deploying and managing objects in Kubernetes is based on the Kubernetes Operator pattern and is implemented with Kubebuilder.
This project was an experiment; it was never a real product.
The config/samples/simpleoperator_v1alpha1_simpleoperator.yaml contains an example for simple-kubernetes-operator with a simple NGINX image.
All commands must be executed at the root of the git project.
Steps to make simpleoperator work on a kind-based cluster:
Set up NGINX for kind:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
Wait until NGINX is deployed:
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s
Set up cert-manager:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.11.0/cert-manager.yaml
Install the staging issuer:
kubectl create -f staging-issuer.yml
Deploy simpleoperator:
kubectl apply -f simpleoperator-0.0.1-deploy-in-cluster.yaml
Test with:
kubectl apply -f config/samples/simpleoperator_v1alpha1_simpleoperator.yaml
kubectl edit so simpleoperator-sample
kubectl delete -f config/samples/simpleoperator_v1alpha1_simpleoperator.yaml
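For reference, the sample CR being applied has roughly this shape. The spec fields below are illustrative only; see config/samples/simpleoperator_v1alpha1_simpleoperator.yaml in the repo for the real schema:

```yaml
apiVersion: simpleoperator.szikes.io/v1alpha1
kind: SimpleOperator
metadata:
  name: simpleoperator-sample
spec:
  # Illustrative fields - the real schema is defined in
  # api/v1alpha1/simpleoperator_types.go
  image: nginx:1.23
  replicas: 1
```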
Install docker (engine version 23.0.1, containerd version 1.6.18), kubectl (v1.26.2), and kind (v0.17.0) on a Linux-based server.
The server has an Intel J3455 CPU, 8 GB RAM, and 60 GB of free space on /.
Clone or download the repo.
Create a cluster with kind:
kind create cluster --name=simple-operator --config=simple-1-control-2-workers.yaml
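The simple-1-control-2-workers.yaml shipped in the repo is a standard kind cluster config; judging by its name, it looks roughly like this (a sketch only, the real file may differ, e.g. in its port mappings):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # For ingress-nginx on kind, the control-plane node typically
    # maps host ports 80/443 into the node container.
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
      - containerPort: 443
        hostPort: 443
  - role: worker
  - role: worker
```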
If everything goes well, $HOME/.kube/config will contain the certificates, context, etc. of simple-operator under the name kind-simple-operator.
Run this to verify:
kubectl cluster-info
You should see something like this (the port will differ):
Kubernetes control plane is running at https://127.0.0.1:36279
CoreDNS is running at https://127.0.0.1:36279/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Now we have a cluster environment.
Reference: phoenixNAP - Guide to Running Kubernetes with Kind
Set up NGINX for kind:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
Wait until NGINX is deployed:
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s
Check what resources are deployed:
kubectl get all --namespace ingress-nginx
Reference: kind - Ingress
My advantage is that I own a domain name and my network infrastructure is already prepared to use it. In an ordinary home setup this is usually not the case, so here are some hints on what you need to do. Before doing anything, check whether your router is behind CGN (carrier-grade NAT). Being behind CGN makes it harder to use the HTTP-01 challenge; asking your ISP for a public IP address, using a VPN, or switching to the DNS-01 challenge can help in that case.
If the WAN IP address on your router and the IP address reported by whatsmyip do not match, your router is behind CGN.
Hints:
- Set a static IP address for the kind runner machine in the router: find the DHCP server settings on the router and manually assign an IP address to the machine's MAC address.
- Open ports 80 & 443: find the Port Forwarding menu, set the internal and external ports to 80 and 443, and use the static IP address as the internal IP address.
- Sign up on No-IP and create a No-IP hostname: after logging in, navigate to Dynamic DNS, No-IP Hostnames, and Create Hostname. You can use whatever hostname you like, but leave the Record Type on DNS Host (A).
Please do not forget that the ISP assigns a dynamic IP address to your router, which may change, so you need to keep the IP address of your No-IP hostname up to date.
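No-IP does offer automated options for this: an official Dynamic Update Client (DUC), and a dyndns-style HTTP update endpoint. A cron-driven curl sketch, assuming the documented nic/update API (the hostname and credentials are placeholders, untested):

```shell
#!/bin/sh
# Update a No-IP hostname with the current public IP.
# Expects NOIP_USER / NOIP_PASS in the environment; run e.g. from cron.
if [ -z "${NOIP_USER:-}" ] || [ -z "${NOIP_PASS:-}" ]; then
  echo "Set NOIP_USER and NOIP_PASS before running"
  exit 0
fi
IP=$(curl -s --max-time 10 https://ifconfig.me)
# dyndns-style update endpoint documented by No-IP
curl -s -u "${NOIP_USER}:${NOIP_PASS}" \
  "https://dynupdate.no-ip.com/nic/update?hostname=szykes.ddns.net&myip=${IP}"
```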
Let's use the FQDN, e.g. szykes.ddns.net, in the Ingress.
If you don't need access from outside the cluster, use nip.io just for fun.
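Wired together, the Ingress side looks roughly like this (a sketch; the Ingress name, backend service, and issuer name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simpleoperator-ingress   # illustrative name
  annotations:
    # tells cert-manager to obtain a certificate for the TLS hosts below
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - szykes.ddns.net
      secretName: szykes-ddns-net-tls
  rules:
    - host: szykes.ddns.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # illustrative backend service
                port:
                  number: 80
```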
Set up cert-manager:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.11.0/cert-manager.yaml
Check what resources are deployed:
kubectl get pods --namespace cert-manager
Install the staging issuer:
kubectl create -f staging-issuer.yml
Check the current status of certificate creation:
kubectl get certificate -o wide
I use the staging issuer because this way I can verify the TLS certificate mechanism without bothering the production side of Let's Encrypt.
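The contents of staging-issuer.yml are not reproduced here, but a typical Let's Encrypt staging ClusterIssuer looks something like this (the name and e-mail are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint: issues untrusted test certificates without
    # consuming Let's Encrypt production rate limits.
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
```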
If everything goes well, the certificate will be reported as Ready.
Reference:
DigitalOcean - How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes
cert-manager - Troubleshooting Problems with ACME / Let's Encrypt Certificates
First, install go (1.19) & kubebuilder (3.9.1).
Change directory to the git project root and execute:
kubebuilder init --domain szikes.io --repo github.com/szikes-adam/simple-kubernetes-operator
kubebuilder create api --group simpleoperator --version v1alpha1 --kind SimpleOperator
- Extend api/v1alpha1/simpleoperator_types.go manually, based on kubebuilder - CRD validation
Reference:
kubebuilder - Tutorial: Building CronJob
kubebuilder - Adding a new API
Reference:
The Cluster API Book - Implementer's Guide
Kubernetes: What is "reconciliation"?
Medium - 10 Things You Should Know Before Writing a Kubernetes Controller
kubernetes blog - Using Finalizers to Control Deletion
And so many other pages...
If you made API changes, run:
make manifests
You can skip the previous step, though, because the following will generate the CRDs and install them on the cluster:
make install
export ENABLE_WEBHOOKS=false
make run
Reference: kubebuilder - Running and deploying the controller
If manual testing with make run looks fine, let's jump into the production environment. The easiest way to do this is to push the latest code to GitHub and wait for the Docker image.
Use GitHub's docker image to deploy:
make deploy IMG=ghcr.io/szykes/simple-kubernetes-operator:main
Let's find where the simpleoperator is:
kubectl get namespaces
Output:
NAME STATUS AGE
default Active 40m
kube-node-lease Active 40m
kube-public Active 40m
kube-system Active 40m
local-path-storage Active 40m
simple-kubernetes-operator-system Active 28m
The simple-kubernetes-operator-system namespace seems promising.
What objects are there?
kubectl get all --namespace=simple-kubernetes-operator-system
Output:
NAME READY STATUS RESTARTS AGE
pod/simple-kubernetes-operator-controller-manager-867588699d-68rsz 2/2 Running 0 44m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/simple-kubernetes-operator-controller-manager-metrics-service ClusterIP 10.96.97.87 <none> 8443/TCP 44m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/simple-kubernetes-operator-controller-manager 1/1 1 1 44m
NAME DESIRED CURRENT READY AGE
replicaset.apps/simple-kubernetes-operator-controller-manager-867588699d 1 1 1 44m
Finally, the log of simpleoperator is here:
kubectl logs --namespace=simple-kubernetes-operator-system pod/simple-kubernetes-operator-controller-manager-867588699d-n8j4p
Delete simpleoperator:
kubectl delete --namespace=simple-kubernetes-operator-system deployment.apps/simple-kubernetes-operator-controller-manager service/simple-kubernetes-operator-controller-manager-metrics-service
Do a make deploy first, if you have not done it yet. Make sure this is the desired tag of the Docker image:
make deploy IMG=ghcr.io/szykes/simple-kubernetes-operator:0.0.1
Build manually the resources:
bin/kustomize build config/default > simpleoperator-0.0.1-deploy-in-cluster.yaml
Deploy based on this file, or share it with anyone, because it is portable:
kubectl apply -f simpleoperator-0.0.1-deploy-in-cluster.yaml
It builds, vets, and runs the tests using make.
Triggered by pushing a new commit to main and by pull requests.
File location in project:
.github/workflows/ci.yml
Reference:
GitHub - Building and testing Go
banzaicloud/koperator - ci.yml
It builds the Docker image using the Dockerfile at the project root.
The images are available on ghcr.io.
Building and pushing Docker images is triggered by pushing a new commit to main or a tag with the version format '*.*.*'. For example: 2.10.5
File location in project:
.github/workflows/docker.yml
Reference:
GitHub - Publishing Docker images
banzaicloud/koperator - docker.yml
First, read & do: Creating a personal access token (PAT)
Login with docker on the machine that needs access:
docker login ghcr.io
It will ask for your GitHub username and your PAT.
If everything goes well, you will see this:
WARNING! Your password will be stored unencrypted in /home/buherton/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Verify access with:
docker pull ghcr.io/szykes/simple-kubernetes-operator:0.0.1
You should see similar to this:
0.0.1: Pulling from szykes/simple-kubernetes-operator
10f855b03c8a: Pull complete
fe5ca62666f0: Pull complete
b438aade3922: Pull complete
fcb6f6d2c998: Pull complete
e8c73c638ae9: Pull complete
1e3d9b7d1452: Pull complete
4aa0ea1413d3: Pull complete
7c881f9ab25e: Pull complete
5627a970d25e: Pull complete
aefd672debf9: Pull complete
Digest: sha256:48e6d8e4cd8252ba3044a1baae7deac41e1be42d80320c3b27d6fae2f14c4cc0
Status: Downloaded newer image for ghcr.io/szykes/simple-kubernetes-operator:0.0.1
ghcr.io/szykes/simple-kubernetes-operator:0.0.1
Reference: GitHub - Working with the Container registry
Not all areas of this project were deeply investigated and built.
Here is the list of what I would do in a next phase:
- See TODOs in the code
- Have proper versioning (rc, beta, etc.) for the git project and Docker image
- Use TLS within the cluster
- Encrypt Secrets