Monitoring OpenShift Pods with Ansible and Zabbix Sender
This smart-start describes how to create an OpenShift project that monitors pods using Zabbix Sender and Ansible.
Prerequisites
- OpenShift or generic Kubernetes cluster
- OpenShift client (oc) or kubectl
- OpenJDK 11 or GraalVM 11
- Zabbix Server
- Podman or Docker
Summary
Creating OpenShift Projects
In this section we'll create OpenShift projects to deploy example APIs, plus CronJobs that collect container metrics and send them to Zabbix Server.
Monitoring API Project
In this project we'll deploy ansible-agent4ocp to monitor the API pods.
$ oc new-project apis-monitoring
Now using project "apis-monitoring" on server "omitted".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git
to build a new example application in Ruby.
Build ansible-agent4ocp
This is the Dockerfile for ansible-agent4ocp
#base image
FROM quay.io/centos/centos:stream8
USER root
#workdir folder
ENV HOME=/opt/scripts
WORKDIR ${HOME}
#CentOS Stream extra repositories
RUN yum install epel-release -y && \
#update the OS
yum update -y && \
#install Ansible
yum install ansible.noarch -y && \
yum clean all && \
#download and install OpenShift client
curl https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/linux/oc.tar.gz --output /tmp/oc.tar.gz && \
tar xvzf /tmp/oc.tar.gz && \
cp oc /usr/local/bin && \
rm oc kubectl && \
rm /tmp/oc.tar.gz && \
#Granting permissions to folders and files
mkdir -pv ${HOME} && \
mkdir -pv ${HOME}/.ansible/tmp && \
mkdir -pv ${HOME}/.kube/ && \
mkdir -pv ${HOME}/playbooks && \
chown -R 1001:root ${HOME} && \
chgrp -R 0 ${HOME} && \
chmod -R g+rw ${HOME}
#folder to save playbooks
VOLUME ${HOME}/playbooks
USER 1001
#ansible example file
ADD example.yml ${HOME}/example.yml
$ podman build ansible-agent4ocp/. --tag ansible-agent4ocp
STEP 1/8: FROM quay.io/centos/centos:stream8
STEP 2/8: USER root
omitted
3dae1ac7584c97e57296f674b42ac1886a88c180aec715860c8118712a81d683
Testing ansible-agent4ocp
$ podman run -it ansible-agent4ocp:latest ansible-playbook example.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [This is a ansible script hello-world] **************************************************************************************************************************************************************************************************
TASK [Hello Ansible] *************************************************************************************************************************************************************************************************************************
changed: [localhost]
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
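The example.yml that the Dockerfile bakes into the image is not listed in this guide. A plausible minimal version, reconstructed so that its play and task names match the run output above (the task body itself is an assumption):

```yaml
#Hypothetical reconstruction of example.yml; the play/task names come from the
#run output above, the shell task body is an assumption
- name: This is a ansible script hello-world
  hosts: localhost
  tasks:
    - name: Hello Ansible
      #a shell task, which is why the recap reports 'changed' rather than 'ok'
      shell: echo "Hello Ansible"
```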
Logging in to the OCP public registry
$ podman login -u $(oc whoami) -p $(oc whoami -t) <ocp-public-registry>
Login Succeeded!
Pushing the image to OCP
- Tag image
$ podman tag localhost/ansible-agent4ocp <ocp-public-registry>/apis-monitoring/ansible-agent4ocp
- Push Image
$ podman push <ocp-public-registry>/apis-monitoring/ansible-agent4ocp
Getting image source signatures
Copying blob e7a4bda8f16d done
omitted
Storing signatures
APIs project
In this project we'll deploy example APIs.
$ oc new-project api
Now using project "api" on server "omitted".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git
to build a new example application in Ruby.
Deploying customer-api
FROM quay.io/centos/centos:stream8
USER root
WORKDIR /work/
RUN chown 1001 /work && \
chmod "g+rwX" /work && \
chown 1001:root /work && \
#OS update
yum update -y && \
#install Zabbix Sender
curl https://repo.zabbix.com/zabbix/3.0/rhel/7/x86_64/zabbix-sender-3.0.9-1.el7.x86_64.rpm --output /tmp/zabbix-sender-3.0.9-1.el7.x86_64.rpm && \
yum install /tmp/zabbix-sender-3.0.9-1.el7.x86_64.rpm -y && \
yum clean all
COPY --chown=1001:root target/*-runner /work/application
EXPOSE 8080
USER 1001
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
- Deploying the application on OCP
$ oc new-app https://github.com/pedroarraes/ocp-zabbix-monitoring.git --context-dir=/customer-api --strategy=docker --name=customer-api
--> Found Docker image dc28896 (5 weeks old) from quay.io for "quay.io/centos/centos:stream8"
CentOS Stream 8
---------------
The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.
omitted
Run 'oc status' to view your app.
- Exposing the application
$ oc expose svc/customer-api
route.route.openshift.io/customer-api exposed
- Testing the API
$ curl $(oc get route customer-api | awk 'FNR==2{print $2}')/hello
Hello RESTEasy
- Testing Zabbix Sender
$ oc get pods
NAME READY STATUS RESTARTS AGE
customer-api-1-build 0/1 Completed 0 12m
customer-api-1-c2rrx 1/1 Running 0 10m
customer-api-1-deploy 0/1 Completed 0 10m
$ oc rsh customer-api-1-c2rrx zabbix_sender
zabbix_sender [23]: either '-c' or '-z' option must be specified
usage:
omitted
command terminated with exit code 1
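The exit-code-1 check above only proves the binary is present. A real send needs at least a target server (-z), the host name as registered in Zabbix (-s), an item key (-k) and a value (-o). Here is a dry-run sketch of the call the CronJob playbooks will later issue; the pod name, server and value are hypothetical placeholders, and the command is echoed rather than executed because it needs a live cluster and Zabbix Server:

```shell
#Hypothetical values; replace with a real pod name, your Zabbix Server
#address and the host name registered in Zabbix
POD=customer-api-1-c2rrx
ZBX_SERVER=zabbix.example.com
ZBX_HOST=customer-api
#-z server, -s monitored host, -k item key, -o value (free memory in KB)
CMD="oc rsh ${POD} zabbix_sender -vv -z ${ZBX_SERVER} -s ${ZBX_HOST} -k free_memory -o 123456"
echo "${CMD}"
```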
Deploying inventory-api
FROM quay.io/centos/centos:stream8
USER root
WORKDIR /work/
RUN chown 1001 /work && \
chmod "g+rwX" /work && \
chown 1001:root /work && \
#OS update
yum update -y && \
#install Zabbix Sender
curl https://repo.zabbix.com/zabbix/3.0/rhel/7/x86_64/zabbix-sender-3.0.9-1.el7.x86_64.rpm --output /tmp/zabbix-sender-3.0.9-1.el7.x86_64.rpm && \
yum install /tmp/zabbix-sender-3.0.9-1.el7.x86_64.rpm -y && \
yum clean all
COPY --chown=1001:root target/*-runner /work/application
EXPOSE 8080
USER 1001
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
$ oc new-app https://github.com/pedroarraes/ocp-zabbix-monitoring.git --context-dir=/inventory-api --strategy=docker --name=inventory-api
--> Found Docker image dc28896 (5 weeks old) from quay.io for "quay.io/centos/centos:stream8"
CentOS Stream 8
---------------
The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.
omitted
Run 'oc status' to view your app.
- Exposing the application
$ oc expose svc/inventory-api
route.route.openshift.io/inventory-api exposed
- Testing the API
$ curl $(oc get route inventory-api | awk 'FNR==2{print $2}')/hello
Hello RESTEasy
- Testing Zabbix Sender
$ oc get pods
NAME READY STATUS RESTARTS AGE
customer-api-1-build 0/1 Completed 0 35m
customer-api-1-c2rrx 1/1 Running 0 33m
customer-api-1-deploy 0/1 Completed 0 33m
inventory-api-1-deploy 0/1 Completed 0 2m19s
inventory-api-1-fswtb 1/1 Running 0 2m15s
inventory-api-3-build 0/1 Completed 0 4m46s
$ oc rsh inventory-api-1-fswtb zabbix_sender
zabbix_sender [23]: either '-c' or '-z' option must be specified
usage:
omitted
command terminated with exit code 1
Configuring the service account
In this section we'll create and configure a service account with permission to access OpenShift pods, collect metrics, and send them to Zabbix Server using Zabbix Sender.
- Creating service account
$ oc create sa sa-apis-monitoring -n apis-monitoring
serviceaccount/sa-apis-monitoring created
- Creating custom roles to view and exec into pods
$ oc create role podview --verb=get,list,watch --resource=pods -n api
role.rbac.authorization.k8s.io/podview created
$ oc create role podexec --verb=create --resource=pods/exec -n api
role.rbac.authorization.k8s.io/podexec created
$ oc create role projectview --verb=get,list --resource=project -n api
role.rbac.authorization.k8s.io/projectview created
- Binding the roles to the service account
$ oc adm policy add-role-to-user podview system:serviceaccount:apis-monitoring:sa-apis-monitoring --role-namespace=api -n api
role "podview" added: "system:serviceaccount:apis-monitoring:sa-apis-monitoring"
$ oc adm policy add-role-to-user podexec system:serviceaccount:apis-monitoring:sa-apis-monitoring --role-namespace=api -n api
role "podexec" added: "system:serviceaccount:apis-monitoring:sa-apis-monitoring"
$ oc adm policy add-role-to-user projectview system:serviceaccount:apis-monitoring:sa-apis-monitoring --role-namespace=api -n api
role "projectview" added: "system:serviceaccount:apis-monitoring:sa-apis-monitoring"
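Under the hood, each add-role-to-user call above creates a RoleBinding in the api namespace that ties the Role to the service account in apis-monitoring. A sketch of the equivalent manifest for podview (the name of the generated binding may differ on your cluster):

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: podview
  namespace: api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: podview
subjects:
  - kind: ServiceAccount
    name: sa-apis-monitoring
    namespace: apis-monitoring
```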
Scheduling OpenShift CronJobs
In this section we'll schedule OpenShift CronJobs to collect pod metrics and send them to Zabbix Server.
Getting Service Account Token
This command retrieves the service account token, which the Ansible scripts will use.
$ oc describe secret $(oc describe sa sa-apis-monitoring -n apis-monitoring | awk '{if(NR==8) print $2}') -n apis-monitoring | grep token | awk '{if(NR==3) print $2}'
omitted
Creating an Ansible file as a ConfigMap to get pods' free memory
- name: Get POD free memory
  hosts: localhost
  tasks:
    - name: OCP Authentication
      #Use the token from the previous section
      shell: oc login --token=<omitted> --server=<omitted>

- name: Get PODS
  hosts: localhost
  tasks:
    - name: Go to API project
      shell: oc project api
    - name: Get PODs
      shell: oc get pods -n api | grep Running | awk '{print $1}'
      register: pods_list
    - name: Get free memory
      shell: oc rsh {{ item }} zabbix_sender -vv -z <zabbix_server_host> -s <zabbix_registered_api> -k free_memory -o $(oc rsh {{ item }} free | awk '{if(NR==2) print $4}')
      with_items: "{{ pods_list.stdout_lines }}"
$ oc create configmap free-memory --from-file=ansible-scripts/free-memory.yml -n apis-monitoring
configmap/free-memory created
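The playbook's inner awk pulls the metric out of free's output: row 2 is the Mem: line, where column 4 is free memory and column 3 is used memory, both in KB. A local sketch of that parsing against sample free output (the numbers are illustrative):

```shell
#Sample 'free' output as seen inside a pod (illustrative numbers)
SAMPLE='              total        used        free      shared  buff/cache   available
Mem:        8010196     2084256     3901148       12020     2024792     5630032
Swap:             0           0           0'
#Same awk the playbooks use: NR==2 selects the Mem: line
FREE_KB=$(printf '%s\n' "$SAMPLE" | awk '{if(NR==2) print $4}')
USED_KB=$(printf '%s\n' "$SAMPLE" | awk '{if(NR==2) print $3}')
echo "free=${FREE_KB} used=${USED_KB}"
```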
Creating an Ansible file as a ConfigMap to get pods' used memory
- name: Get POD used memory
  hosts: localhost
  tasks:
    - name: OCP Authentication
      #Use the token from the previous section
      shell: oc login --token=<omitted> --server=<omitted>

- name: Get PODS
  hosts: localhost
  tasks:
    - name: Go to API project
      shell: oc project api
    - name: Get PODs
      shell: oc get pods -n api | grep Running | awk '{print $1}'
      register: pods_list
    - name: Get used memory
      shell: oc rsh {{ item }} zabbix_sender -vv -z <zabbix_server_host> -s <zabbix_registered_api> -k used_memory -o $(oc rsh {{ item }} free | awk '{if(NR==2) print $3}')
      with_items: "{{ pods_list.stdout_lines }}"
$ oc create configmap used-memory --from-file=ansible-scripts/used-memory.yml -n apis-monitoring
configmap/used-memory created
Configuring the free-memory CronJob
kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: free-memory
spec:
  schedule: '*/5 * * * *'
  concurrencyPolicy: Allow
  suspend: false
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          volumes:
            - name: free-memory
              configMap:
                name: free-memory
                defaultMode: 420
          containers:
            - name: ansible-agent4ocp
              image: >-
                image-registry.openshift-image-registry.svc:5000/apis-monitoring/ansible-agent4ocp
              args:
                - /bin/sh
                - '-c'
                - ansible-playbook playbooks/free-memory.yml
              resources: {}
              volumeMounts:
                - name: free-memory
                  mountPath: /opt/scripts/playbooks
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              imagePullPolicy: Always
          restartPolicy: OnFailure
          terminationGracePeriodSeconds: 30
          dnsPolicy: ClusterFirst
          securityContext: {}
          schedulerName: default-scheduler
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
$ oc create -f cronjobs/free-memory-jobs.yml -n apis-monitoring
cronjob.batch/free-memory created
Configuring the used-memory CronJob
kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: used-memory
spec:
  schedule: '*/5 * * * *'
  concurrencyPolicy: Allow
  suspend: false
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          volumes:
            - name: used-memory
              configMap:
                name: used-memory
                defaultMode: 420
          containers:
            - name: ansible-agent4ocp
              image: >-
                image-registry.openshift-image-registry.svc:5000/apis-monitoring/ansible-agent4ocp
              args:
                - /bin/sh
                - '-c'
                - ansible-playbook playbooks/used-memory.yml
              resources: {}
              volumeMounts:
                - name: used-memory
                  mountPath: /opt/scripts/playbooks
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              imagePullPolicy: Always
          restartPolicy: OnFailure
          terminationGracePeriodSeconds: 30
          dnsPolicy: ClusterFirst
          securityContext: {}
          schedulerName: default-scheduler
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
$ oc create -f cronjobs/used-memory-jobs.yml -n apis-monitoring
cronjob.batch/used-memory created