The setup is deployed on 3 virtual machines, but should run on real hardware as well.
In this setup I use:
- Docker
- Kubernetes
- Kafka
- GitLab CI
- Prometheus
- Grafana
Host machine:
- CPU: Intel Core i7-4790 3.6 GHz 4 cores / 8 threads
- RAM: 32 GB
- Windows 7 x64
Virtual machine 1 (Kubernetes master node):
- Ubuntu 18.04 x64
- 8 GB RAM
- 100 GB storage
- 4 cores
- Intel VT-x enabled
- Network: NAT
- IP: 192.168.217.155
Virtual machine 2 (Kubernetes worker node 1):
- Ubuntu 18.04 x64
- 6 GB RAM
- 100 GB storage
- 4 cores
- Intel VT-x enabled
- Network: NAT
- IP: 192.168.217.156
Virtual machine 3 (Kubernetes worker node 2):
- Ubuntu 18.04 x64
- 6 GB RAM
- 100 GB storage
- 4 cores
- Intel VT-x enabled
- Network: NAT
- IP: 192.168.217.157
Master & Workers
First of all, update the system on every virtual machine:
sudo apt-get update && sudo apt-get upgrade
Next, look up the IP address of each virtual machine:
hostname -I | awk '{print $1}'
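hostname -I can print more than one address (for example a Docker bridge IP once Docker is installed); the awk filter keeps only the first field. A quick illustration with made-up addresses:

```shell
# awk '{print $1}' keeps the first whitespace-separated field
echo "192.168.217.155 172.17.0.1" | awk '{print $1}'
# prints 192.168.217.155
```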
Then add the host entries on every machine:
sudo bash -c 'echo "192.168.217.155 kube-master" >> /etc/hosts'
sudo bash -c 'echo "192.168.217.156 kube-worker" >> /etc/hosts'
sudo bash -c 'echo "192.168.217.157 kube-worker2" >> /etc/hosts'
Kubernetes requires swap to be disabled, so switch it off:
sudo sed -i '/swapfile/d' /etc/fstab
sudo bash -c 'echo "3" > /proc/sys/vm/drop_caches'
sudo swapoff -a
sudo rm -f /swapfile
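If you want to sanity-check the sed expression first, you can run it against a throwaway copy instead of the real /etc/fstab (the file contents here are made up):

```shell
# Build a sample fstab containing a swapfile entry
cat > /tmp/fstab.sample <<'EOF'
UUID=0000-0000 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF
# Same expression as above: delete any line mentioning swapfile
sed -i '/swapfile/d' /tmp/fstab.sample
cat /tmp/fstab.sample
```

After a reboot, `swapon --show` should print nothing.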
Master
sudo hostnamectl set-hostname kube-master
Worker 1
sudo hostnamectl set-hostname kube-worker
Worker 2
sudo hostnamectl set-hostname kube-worker2
Master & Workers
sudo apt-get install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker
Master
This registry acts as a local pull-through cache for all images downloaded from the Internet; we can only pull from it, not push to it.
sudo docker run -e REGISTRY_STORAGE_DELETE_ENABLED="true" -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io -d -p 5000:5000 --restart=always --name registry-map2 registry:2
This registry will store all images built by our system; we will both push to and pull from it.
sudo docker run -e REGISTRY_STORAGE_DELETE_ENABLED="true" -d -p 6000:5000 --restart=always --name registry registry:2
Master & Workers
sudo touch /etc/docker/daemon.json
sudo bash -c 'echo "{\"registry-mirrors\":[\"http://kube-master:5000\"],\"insecure-registries\":[\"kube-master:5000\",\"kube-master:6000\"]}" >> /etc/docker/daemon.json'
sudo bash -c 'echo "DOCKER_OPTS=\"--config-file=/etc/docker/daemon.json\"" >> /etc/default/docker'
sudo systemctl restart docker
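For reference, the escaped JSON appended above unescapes to the following /etc/docker/daemon.json (pretty-printed here):

```json
{
  "registry-mirrors": ["http://kube-master:5000"],
  "insecure-registries": ["kube-master:5000", "kube-master:6000"]
}
```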
To see Docker status run:
sudo docker system info
Master & Workers
sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo sysctl net/netfilter/nf_conntrack_max=524288
sudo apt-get update && sudo apt-get install -y kubelet=1.24.3-00 kubeadm=1.24.3-00 kubectl=1.24.3-00
Master
sudo kubeadm init --control-plane-endpoint kube-master:6443 --pod-network-cidr 192.168.150.0/23 --upload-certs
At this step kubeadm will output a command to make worker nodes join the cluster starting with sudo kubeadm join. Save this command.
The join token will live for 24 hours, so when you need to generate another one, use these commands:
kubeadm token list
kubeadm token create --print-join-command
Now let's continue with the setup.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can start another terminal to watch for changes:
watch kubectl get pods --all-namespaces
Workers
Run the worker node join command you saved before, but remember to add "sudo" at the beginning; it will look something like:
sudo kubeadm join kube-master:6443 --token __some_token__ \
--discovery-token-ca-cert-hash sha256:__some_hash_code__
Master
curl -s https://docs.projectcalico.org/manifests/calico.yaml | \
sed \
-e 's| # - name: CALICO_IPV4POOL_CIDR| - name: CALICO_IPV4POOL_CIDR|g' \
-e "s| # value: \"192.168.0.0/16\"| value: \"192.168.150.0/23\"|g" \
> calico.yaml
kubectl apply -f calico.yaml
kubectl get nodes
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
kubectl proxy --address="kube-master" -p 8001 --accept-hosts='^*$'
Then open http://kube-master:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ in a browser
Master & Workers
Recent versions of Kubernetes use containerd as the container runtime, so we also need to configure the insecure HTTP image registries there.
sudo mkdir -p /etc/containerd
sudo gedit /etc/containerd/config.toml
Now set the registries' urls and save:
[plugins.cri.registry]
  [plugins.cri.registry.mirrors]
    [plugins.cri.registry.mirrors."kube-master:5000"]
      endpoint = ["http://kube-master:5000"]
    [plugins.cri.registry.mirrors."kube-master:6000"]
      endpoint = ["http://kube-master:6000"]
  [plugins.cri.registry.configs]
    [plugins.cri.registry.configs."kube-master:5000".tls]
      insecure_skip_verify = true
    [plugins.cri.registry.configs."kube-master:6000".tls]
      insecure_skip_verify = true
Now restart ContainerD:
sudo systemctl restart containerd
If you ever need to tear the cluster down and start over:
sudo kubeadm reset
sudo rm -rf /etc/systemd/system/kubelet.service.d
rm -rf $HOME/.kube/config
sudo apt-get remove --purge kubelet kubeadm kubectl
Ref: https://snourian.com/kafka-kubernetes-strimzi-part-1-creating-deploying-strimzi-kafka/
Ref: https://github.com/nrsina/strimzi-kafka-tutorial
git clone -b 0.30.0 https://github.com/strimzi/strimzi-kafka-operator.git
cd strimzi-kafka-operator
sed -i 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
kubectl create namespace kafka
kubectl create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount kafka:strimzi-cluster-operator
kubectl create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount kafka:strimzi-cluster-operator
kubectl create clusterrolebinding strimzi-cluster-operator-topic-operator-delegation --clusterrole=strimzi-topic-operator --serviceaccount kafka:strimzi-cluster-operator
kubectl apply -f install/cluster-operator -n kafka
kubectl get deployments -n kafka
cp examples/kafka/kafka-ephemeral.yaml examples/kafka/kafka-ephemeral-2.yaml
gedit examples/kafka/kafka-ephemeral-2.yaml
Replace (in this order, so values you have just written are not replaced again):
- every 2 with 1
- every 3 with 2
This scales the replica counts and replication factors down to fit the two worker nodes.
Add in spec->kafka->config:
auto.create.topics.enable: "true"
delete.topic.enable: "true"
Add in spec->kafka->listeners:
- name: external
  port: 9094
  type: nodeport
  tls: false
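After all the edits, the spec->kafka part of kafka-ephemeral-2.yaml should look roughly like this (abridged; the plain and tls listeners and the replication-factor keys come from the stock Strimzi example, reduced per the replacements above):

```yaml
spec:
  kafka:
    replicas: 2
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external
        port: 9094
        type: nodeport
        tls: false
    config:
      auto.create.topics.enable: "true"
      delete.topic.enable: "true"
      offsets.topic.replication.factor: 2
      transaction.state.log.replication.factor: 2
      transaction.state.log.min.isr: 1
```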
Then do:
kubectl apply -f examples/kafka/kafka-ephemeral-2.yaml -n kafka
kubectl get deployments -n kafka
gedit kafka-topic.yaml
Set:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
kubectl apply -f kafka-topic.yaml -n kafka
kubectl get svc -n kafka
kubectl run kafka-producer -ti --image=quay.io/strimzi/kafka:0.30.0-kafka-3.2.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap.kafka:9092 --topic my-topic
kubectl run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.30.0-kafka-3.2.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap.kafka:9092 --topic my-topic --from-beginning
To use Kafka externally:
gedit kafka-external.yaml
Set:
apiVersion: v1
kind: Service
metadata:
  name: my-cluster-kafka-external-bootstrap
spec:
  type: NodePort
  selector:
    strimzi.io/cluster: my-cluster
    strimzi.io/name: my-cluster-kafka
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 9094
      targetPort: 9094
      # Optional field: by default the control plane allocates a port from a range (default: 30000-32767)
      nodePort: 30825
kubectl apply -f kafka-external.yaml -n kafka
To see the designated port, use:
kubectl get service --namespace kafka | grep external
You will get something like:
my-cluster-kafka-external-bootstrap NodePort 10.106.223.28 <none> 9094:31318/TCP 18m
So 31318 is the port you need.
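If you want the port without eyeballing the output, the fifth column can be split on ':' and '/'; a small sketch against the sample line above:

```shell
# Parse the nodePort out of the sample line: field 5 is "9094:31318/TCP",
# so splitting on ':' or '/' leaves the nodePort as the second piece
line='my-cluster-kafka-external-bootstrap   NodePort   10.106.223.28   <none>   9094:31318/TCP   18m'
echo "$line" | awk '{split($5, a, "[:/]"); print a[2]}'
# prints 31318
```

On the cluster itself, `kubectl get service my-cluster-kafka-external-bootstrap -n kafka -o jsonpath='{.spec.ports[0].nodePort}'` returns the same number directly.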
To test Kafka externally, either build the Kafka console tools from source:
cd ~
git clone -b 3.2.0 https://github.com/apache/kafka.git
cd kafka
./gradlew jar -PscalaVersion=2.13.6
bin/kafka-console-producer.sh --broker-list kube-master:31318 --topic my-topic
bin/kafka-console-consumer.sh --bootstrap-server kube-master:31318 --topic my-topic --from-beginning
Or use kafkacat:
sudo apt install -y kafkacat
echo "hello world!" | kafkacat -P -b kube-master:31318 -t my-topic
kafkacat -C -b kube-master:31318 -t my-topic
cd ~
git clone https://github.com/nrsina/strimzi-kafka-tutorial.git
cd strimzi-kafka-tutorial/strimzi-producer
sudo docker build -t nrsina/strimzi-producer:v1 .
sudo docker tag nrsina/strimzi-producer:v1 kube-master:6000/strimzi-producer:v1
sudo docker push kube-master:6000/strimzi-producer:v1
gedit deployment/deployment.yml
Set:
image: kube-master:6000/strimzi-producer:v1
imagePullPolicy: IfNotPresent
- name: SP_SLEEP_TIME_MS
  value: "2000ms"
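For orientation, the edited container section of deployment/deployment.yml ends up looking roughly like this (container name and surrounding structure are abridged from the tutorial repo):

```yaml
containers:
  - name: strimzi-producer
    image: kube-master:6000/strimzi-producer:v1
    imagePullPolicy: IfNotPresent
    env:
      - name: SP_SLEEP_TIME_MS
        value: "2000ms"
```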
kubectl apply -f deployment/deployment.yml
kubectl logs -f strimzi-producer-deployment-7655d6c9d7-jjnfx
(replace the pod name with the actual one shown by kubectl get pods)
Install sbt (this uses SDKMAN; any sbt installation works):
sdk install sbt
cd ~
cd strimzi-kafka-tutorial/strimzi-consumer
sudo chmod 666 /var/run/docker.sock
sbt docker:publishLocal
sudo docker tag nrsina/strimzi-consumer:v1 kube-master:6000/strimzi-consumer:v1
sudo docker push kube-master:6000/strimzi-consumer:v1
gedit deployment/deployment.yml
Set:
replicas: 2
image: kube-master:6000/strimzi-consumer:v1
imagePullPolicy: IfNotPresent
kubectl apply -f deployment/deployment.yml
kubectl logs -f strimzi-consumer-deployment-f86469b6-c9cbt
Open configurations:
gedit ~/strimzi-kafka-operator/examples/metrics/kafka-metrics.yaml
gedit ~/strimzi-kafka-operator/examples/kafka/kafka-ephemeral-2.yaml
Copy from kafka-metrics.yaml to kafka-ephemeral-2.yaml:
- metrics from spec->kafka->metricsConfig
- metrics from spec->zookeeper->metricsConfig
- the entire ConfigMap document, i.e. everything starting from the "---" line that precedes "kind: ConfigMap"
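After the merge, the kafka section of kafka-ephemeral-2.yaml should gain a block roughly like this (key names as in the stock Strimzi kafka-metrics.yaml; double-check against your copy):

```yaml
spec:
  kafka:
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics
          key: kafka-metrics-config.yml
```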
kubectl apply -f ~/strimzi-kafka-operator/examples/kafka/kafka-ephemeral-2.yaml -n kafka
cd ~ && mkdir prometheus
cd ~/prometheus
curl -s https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/master/bundle.yaml > bundle.yaml
gedit bundle.yaml
Now replace:
namespace: default -> namespace: monitoring
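The replacement can also be done non-interactively with sed; shown here on a tiny throwaway sample so the effect is visible (run the same expression against bundle.yaml):

```shell
# Demonstrate the replacement on a sample file; for the real change,
# apply the same sed expression to bundle.yaml
printf 'metadata:\n  namespace: default\n' > /tmp/bundle.sample.yaml
sed -i 's/namespace: default/namespace: monitoring/g' /tmp/bundle.sample.yaml
cat /tmp/bundle.sample.yaml
```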
kubectl create namespace monitoring
kubectl apply -f bundle.yaml -n monitoring --force-conflicts=true --server-side
kubectl get pods -n monitoring
kubectl get svc -n monitoring
cd ~/strimzi-kafka-operator/examples/metrics/prometheus-additional-properties
kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml -n monitoring
cd ~/strimzi-kafka-operator/examples/metrics/prometheus-install
gedit strimzi-pod-monitor.yaml
Change:
myproject -> kafka
kubectl apply -f strimzi-pod-monitor.yaml -n monitoring
gedit prometheus.yaml
Change:
namespace: myproject -> namespace: monitoring
kubectl apply -f prometheus-rules.yaml -n monitoring
kubectl apply -f prometheus.yaml -n monitoring
kubectl get pods -n monitoring
cd ~/strimzi-kafka-operator/examples/metrics/grafana-install
kubectl apply -f grafana.yaml -n monitoring
kubectl port-forward svc/grafana 3000:3000 -n monitoring
- Open http://localhost:3000
- Log in with username/password: admin/admin
- Add Prometheus as a new data source
- In the data source's Settings tab, set the URL to the Prometheus address, e.g. http://prometheus-operated:9090
kubectl get svc -n monitoring
Use addresses like:
- http://prometheus-operated:9090
- http://prometheus-operated.monitoring:9090
- http://prometheus-operated.monitoring.svc.cluster.local:9090
Import these files through the Grafana webpage (select Prometheus datasource while importing):
From ~/strimzi-kafka-operator/examples/metrics/grafana-dashboards:
- strimzi-kafka.json
- strimzi-kafka-exporter.json
- strimzi-operators.json
- strimzi-zookeeper.json
Remove GitLab if already present:
sudo docker rm -f gitlab
Now install:
export GITLAB_HOME=/srv/gitlab
sudo docker run --detach \
--hostname kube-master \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
--shm-size 256m \
gitlab/gitlab-ee:latest
To view GitLab logs:
sudo docker logs -f gitlab
Export GitLab address:
export GITLAB_CI_SERVER_URL=http://kube-master
Next get password:
sudo docker exec -it gitlab grep 'Password:' /etc/gitlab/initial_root_password
You will get something like:
Password: <password>
Open http://kube-master in a browser and log in as root with that password.
Then create a group Test Rust with URL test-rust.
After that, create a project Hyper 1 with URL hyper-1.
Add an instance runner and copy its registration token.
Worker 1
export GITLAB_CI_SERVER_URL=http://kube-master
sudo docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
sudo docker run --rm -it -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register
When prompted by the registration wizard:
- GitLab URL = http://192.168.217.155/
- paste the registration token of the instance runner
- executor = docker
- default Docker image = docker:dind
sudo gedit /srv/gitlab-runner/config/config.toml
Add to [[runners]]:
clone_url = "http://kube-master/"
Then add to [runners.docker]:
hostname = "http://kube-master/"
privileged = true
image = "docker:dind"
extra_hosts = ["kube-master:192.168.217.155"]
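Putting the pieces together, the relevant part of /srv/gitlab-runner/config/config.toml should end up roughly like this (keys generated by the register step are omitted):

```toml
[[runners]]
  clone_url = "http://kube-master/"
  [runners.docker]
    hostname = "http://kube-master/"
    privileged = true
    image = "docker:dind"
    extra_hosts = ["kube-master:192.168.217.155"]
```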
Then do:
sudo docker restart gitlab-runner
First, set up a sample Rust project (see next section)
Then in GitLab open Settings->CI->Edit and add a .gitlab-ci.yml file (copy its contents from this repo).
The pipeline should automatically run and rerun every time the sample project repo is updated.
To update the sample project on Kubernetes, you need to run:
kubectl rollout restart deployment rust-hyper-depl
Master
sudo apt install -y build-essential
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"
Then you need to get the Rust project from https://github.com/LumaRay/test-simple-web-server/tree/master/test-rust-hyper
Don't forget .gitignore
cd ~/test-rust-hyper
cargo build --release
Copy test9.dockerfile to the application folder (copy its contents from this repo).
sudo docker build --pull --rm -f "test9.dockerfile" -t testrusthyper:latest "."
sudo docker tag testrusthyper:latest kube-master:6000/testrusthyper
sudo docker push kube-master:6000/testrusthyper
sudo docker pull kube-master:6000/testrusthyper
sudo apt-get install curl
curl http://kube-master:6000/v2/_catalog
curl -X GET kube-master:6000/v2/testrusthyper/tags/list
curl -X GET kube-master:6000/v2/testrusthyper/manifests/latest
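If the push succeeded, the catalog request should list the image, along the lines of:

```json
{"repositories":["testrusthyper"]}
```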
git config --global user.name "Administrator"
git config --global user.email "admin@example.com"
git init
git remote add origin http://kube-master/test-rust/hyper-1.git
git add .
git commit -m "Initial commit"
git push -u origin master
Copy rusthyper_kubedepl.yaml from this repo
kubectl apply -f ./rusthyper_kubedepl.yaml
To remove the deployment:
kubectl delete -f ./rusthyper_kubedepl.yaml
Different ways to run the project on Kubernetes:
kubectl run rust-hyper --image=testrusthyper:v1 --image-pull-policy=IfNotPresent
kubectl run rust-hyper --image=kube-master:6000/testrusthyper --image-pull-policy=IfNotPresent --port 30001
Check the pod status and application logs:
kubectl describe pods rust-hyper
kubectl logs rust-hyper-depl-787b6d9c97-fxhqm