ahmednoureldeen / sentinel-dashboard

This repository contains a simple metrics dashboard to apply the Observability course content from the Udacity Cloud Native Applications Architecture Nanodegree program


Sentinel Dashboard

Overview

Welcome to sentinel-dashboard!

sentinel-dashboard is a simple metrics dashboard used to apply the Observability course content from the Udacity Cloud Native Applications Architecture Nanodegree program.

You are given a simple Python application written in Flask, and your task is to apply basic SLOs and SLIs to achieve observability: create dashboards that use multiple graphs to monitor the sample application deployed on a Kubernetes cluster.


What is Observability?

    Observability is the ability of a business to gain valuable insights about the internal state or condition of a system just by analyzing data from its external outputs. If a system is highly observable, businesses can promptly analyze the root cause of an identified performance issue without any need for testing or coding.

    In DevOps, observability refers to the software tools and methodologies that help Dev and Ops teams log, collect, correlate, and analyze massive amounts of performance data from a distributed application and glean real-time insights. This empowers teams to effectively monitor, revamp, and enhance the application to deliver a better customer experience.

Technologies

  • Prometheus: Monitoring tool.
  • Grafana: Visualization tool.
  • Jaeger: Tracing tool.
  • Flask: Python web framework.
  • Vagrant: Virtual machine management tool.
  • VirtualBox: Hypervisor allowing you to run multiple operating systems.
  • K3s: Lightweight distribution of K8s to easily develop against a local cluster.
  • Ingress NGINX: An application that runs in a cluster and configures an HTTP load balancer according to Ingress resources.

Getting Started

1. Prerequisites

We will install the tools needed to get our environment set up properly.

  1. Set up kubectl
  2. Install VirtualBox with at least version 6.0.x
  3. Install Vagrant with at least version 2.0.x
  4. Install OpenSSH
  5. Install sshpass
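
A quick way to confirm the tools above are on your PATH (a sketch; the binary names are assumed to be the standard ones, e.g. VirtualBox ships `VBoxManage`):

```shell
# Report which prerequisite binaries are installed
for tool in kubectl VBoxManage vagrant ssh sshpass; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```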

2. Environment Setup

To run the application, you will need a K8s cluster running locally and to interface with it via kubectl. We will be using Vagrant with VirtualBox to run K3s.

Initialize K3s

In this project's root directory, run:

vagrant up

Note:

  • Don't run this command until you have read this section.
  • The environment setup can take up to 20 minutes depending on your network bandwidth, so be patient. Grab a coffee or something. If the installation fails, run vagrant destroy and then vagrant up again, but there is only a slim chance you will need to: this setup has been tested numerous times. Remember, good things come to those who wait patiently = )
  • You can run vagrant suspend to conserve some of your system's resources and vagrant resume when you want to bring them back up. Some useful vagrant commands can be found in this cheatsheet.

The previous command will leverage VirtualBox to load an openSUSE OS and provision the following for you:

  • k3s v1.25.7+k3s1 kubernetes cluster.
  • ingress-nginx Helm chart installed in ingress-nginx namespace.
  • jetstack/cert-manager v1.9.0 Helm chart installed in cert-manager namespace.
  • prometheus-community/kube-prometheus-stack Helm chart installed in monitoring namespace.
  • jaeger-operator v1.34.1 installed in observability namespace.
  • jaeger-all-in-one instance installed in default namespace.
  • hotrod application to discover Jaeger capabilities and test around.

Additionally, the setup will out-of-the-box configure the following for you:

  • Expose:
    • Grafana Server on your local machine at http://localhost:30000.
    • Prometheus Server on your local machine at http://localhost:30001.
    • Jaeger UI on your local machine at http://localhost:30002.
    • hotrod app UI on your local machine at http://localhost:30003.

📝 Note: the installation will expose ports 30000 to 30010, so you can easily expose any service you want using a NodePort service and access it locally. You can configure the port range in your Vagrantfile.
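
For example, a NodePort service inside that range might look like this (a sketch; the `my-app` name and the port numbers are placeholders, not part of this setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app              # placeholder service name
  namespace: default
spec:
  type: NodePort
  selector:
    app: my-app             # must match your pod labels
  ports:
    - port: 8080            # service port inside the cluster
      targetPort: 8080      # container port
      nodePort: 30004       # any free port in the exposed 30000-30010 range
```

After applying it with kubectl, the service would be reachable from your local machine at http://localhost:30004.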

  • Automatically configure jaeger-all-in-one instance located in default namespace as a datasource in Grafana Server.

  • Automatic scraping of:

    • jaeger-operator metrics located in observability namespace.
    • jaeger-all-in-one instance metrics located in default namespace.
  • Create Jaeger-all-in-one / Overview dashboard automatically to provide observability for your Jaeger instance which can be further customized.

  • Configure local access to your provisioned k3s cluster. Have no worries: the setup will back up your kubeconfig file, if one exists, to your HOME directory with the current timestamp.

    Warning: This requires a Linux or macOS machine with kubectl installed locally; otherwise, please comment out lines 109-112 in the Vagrantfile before running vagrant up.

    After running vagrant up, you can use the scripts/copy_kubeconfig.sh script independently to install the /etc/rancher/k3s/k3s.yaml file onto your local machine.

Warning: You need to run the script from the project's root directory, where the Vagrantfile resides; otherwise, it will fail.

Execute the following:

$ bash scripts/copy_kubeconfig.sh

[INFO] connecting to vagrant@127.0.0.1:2222..
[INFO] connection success..
[WARNING] /Users/shehabeldeen/.kube/config already exists
[INFO] backing up /Users/shehabeldeen/.kube/config to /Users/shehabeldeen/config.backup.1678907724..
[INFO] copying k3s kubeconfig to /Users/shehabeldeen/.kube/config
[INFO] you can now access k3s cluster locally, run:
    $ kubectl version

📝 Note:

  • copy_kubeconfig.sh accepts one argument, config_path: the destination path where k3s.yaml will be installed. By default it is equal to "${HOME}/.kube/config"
  • copy_kubeconfig.sh reads 4 environment variables:
    • SSH_USER: remote user accessed by vagrant ssh; vagrant by default.
    • SSH_USER_PASS: remote user password; vagrant by default.
    • SSH_PORT: SSH port; 2222 by default, which is forwarded from the host machine to the guest machine at 22.
    • SSH_HOST: SSH server hostname; localhost by default.
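
The defaults above can be sketched with standard shell parameter expansion (this is an assumption about how the script resolves its settings, not its actual source):

```shell
# Assumed resolution of the script's settings: each environment variable
# falls back to its documented default, and $1 is the optional config_path.
SSH_USER="${SSH_USER:-vagrant}"
SSH_USER_PASS="${SSH_USER_PASS:-vagrant}"
SSH_PORT="${SSH_PORT:-2222}"
SSH_HOST="${SSH_HOST:-localhost}"
CONFIG_PATH="${1:-${HOME}/.kube/config}"

echo "target: ${SSH_USER}@${SSH_HOST}:${SSH_PORT}"
echo "kubeconfig destination: ${CONFIG_PATH}"
```

So, for example, overriding just the port would look like: SSH_PORT=2200 bash scripts/copy_kubeconfig.sh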

3. Validate Installation

As mentioned in the previous section, the installation can take up to 20 minutes. These are some logs from the vagrant up command that you can validate your installation with:

Prometheus & Grafana successfully installed

Hotrod Application & Jaeger successfully installed

Local kubectl access successfully granted


3.1 Workloads Check

Use kubectl to check the workloads, you should be able to find the following:
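
One way to check them from your local machine (assuming your kubeconfig now points at the k3s cluster; these commands require the cluster to be up):

```shell
# All pods in every namespace; -o wide adds node and IP columns
kubectl get pods --all-namespaces -o wide

# Or inspect each provisioned namespace individually
kubectl get deployments -n ingress-nginx
kubectl get deployments -n cert-manager
kubectl get deployments -n monitoring
kubectl get deployments -n observability
kubectl get deployments -n default   # jaeger-all-in-one and hotrod live here
```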

Local kubectl, check workloads

Notice how we can access our k3s cluster locally without having to ssh into the virtual machine

3.2 Functionality Check

Now go to http://localhost:30000 on your local browser and you should see Grafana Login Page:

Grafana Login Page Enter the credentials shown in the logs

Grafana Home Page Go to your datasources page

Grafana Datasources Page Jaeger has been automatically configured as a datasource

Jaeger Datasource Configuration Page Connection to Jaeger instance is successful


Go to http://localhost:30001 and you should see Prometheus Server Home Page:

Prometheus Home Page Check Prometheus current targets

Prometheus Targets Page Prometheus has been automatically configured to scrape data from jaeger-operator and jaeger-all-in-one
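
With those targets up, you can try a query in the Prometheus UI, e.g. the rate at which the Jaeger collector receives spans (the metric name is assumed from Jaeger's standard exported metrics and may differ across versions):

```
sum(rate(jaeger_collector_spans_received_total[5m]))
```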


Go to http://localhost:30003 and you should see Hotrod Application:

Hotrod Application Page hotrod app is composed of multiple services running in parallel: frontend, backend, customer, and more. Click on any customer to dispatch a driver, which will initiate a trace

Hotrod Application Page, dispatch driver A jaeger trace has been triggered. Click on the link

Jaeger Query UI, searching for driver The link will redirect you to localhost:16686, which is jaeger-query's actual port, but remember: we have it exposed on 30002 on our local machine, so change the port only. Now you can see the trace for the dispatch initiated from the frontend service. Feel free to look around

Jaeger Query UI, span details This is what the span looks like. Return to hotrod app and trigger a lot of random simultaneous dispatches

Hotrod Application Page, dispatch many drivers Click as many times as you can to collect some data. Now go back to Grafana at http://localhost:30000 and hit the dashboards


The setup has provided a Jaeger-all-in-one / Overview dashboard to give us insights on how Jaeger is actually performing:

Grafana Jaeger-all-in-one Dashboard


Congratulations! You have provisioned the infrastructure successfully. Feel free to play around. Now you are ready to install the Python application, but first you need to remove the hotrod application alongside the jaeger-all-in-one instance, or remove hotrod only, as the Python application will use the Jaeger instance in the default namespace.

To remove hotrod application run the following:

$ kubectl delete svc hotrod hotrod-external; kubectl delete deployments hotrod

service "hotrod" deleted
service "hotrod-external" deleted
deployment.apps "hotrod" deleted


📖 Author

Shehab El-Deen Alalkamy
