SkaleSafe empowers organizations to confidently navigate the complexity of their Kubernetes clusters through a powerful web app offering insightful, customized metric visualization. Our focus on crucial scaling metrics, comprehensive cluster health metrics, and actionable alerts sets us apart, enabling our clients to make informed decisions and drive their businesses forward. We strive to make monitoring and managing Kubernetes clusters simple, accessible, and effortless for all.
-
Fork SkaleSafe's repository & then clone your forked repository using your GitHub handle
git clone https://github.com/your-github-handle/SkaleSafe.git
-
Install NPM packages
npm install
-
Create a .env file at the root of your cloned directory
-
Connect your MongoDB database in the .env file
PORT=3000
MONGO_URI='YOUR MONGO URI STRING'
SALT_WORK_FACTOR=10
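A .env file is just a list of KEY=VALUE lines. As a rough illustration of how those lines become configuration values (the app itself most likely loads them with the dotenv package; this sketch uses only Node's standard library), consider:

```javascript
// Minimal sketch of .env parsing — illustration only, not SkaleSafe's
// actual loader. Blank lines and # comments are skipped; surrounding
// quotes around values are stripped.
function parseEnv(contents) {
  const vars = {};
  for (const line of contents.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks/comments
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue; // not a KEY=VALUE line
    const key = trimmed.slice(0, eq).trim();
    let value = trimmed.slice(eq + 1).trim();
    value = value.replace(/^['"]|['"]$/g, ''); // drop surrounding quotes
    vars[key] = value;
  }
  return vars;
}

const sample = `PORT=3000
MONGO_URI='YOUR MONGO URI STRING'
SALT_WORK_FACTOR=10`;

console.log(parseEnv(sample));
```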
-
If your cluster is running, you may start the app with the command
npm run app
-
Or, if you'd like to use Electron's embedded Chromium/Node.js combination, you may start the app with the command
npm run elec
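Both commands map to scripts defined in the project's package.json. The exact entries live in the repo; a purely hypothetical sketch of what such scripts might look like (the script targets here are examples, not the actual paths):

```json
{
  "scripts": {
    "app": "node server/server.js",
    "elec": "electron ."
  }
}
```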
Prometheus is an open-source monitoring system that runs as a collection of pods inside your Kubernetes cluster. Please see the official documentation for extensive details: Prometheus Docs.
-
All of the Prometheus configuration files mentioned in this section are created for you and hosted on GitHub. Clone this repo using the following command:
git clone https://github.com/daniel-doody/setup-prometheus-kubernetes.git
Now we need to create a cluster namespace for all of our monitoring components. We create a dedicated namespace because we don't want our monitoring pods floating around in the default namespace.
-
Execute the following command to create a new namespace: monitoring.
kubectl create namespace monitoring
-
Navigate to your cloned folder with the Prometheus files, then apply the 'cluster-role.yaml' file to create a ClusterRole with the following RBAC verbs: get, list, watch.
kubectl apply -f cluster-role.yaml
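For orientation, a typical Prometheus ClusterRole grants read-only verbs on core cluster resources and is bound to a service account in the monitoring namespace. The file in the cloned repo is authoritative; this is only an illustrative sketch of the usual shape:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    resources: [nodes, services, endpoints, pods]
    verbs: [get, list, watch]
  - nonResourceURLs: ["/metrics"]
    verbs: [get]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: default
    namespace: monitoring
```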
-
Next, create a Config Map by applying 'config-map.yaml' to externalize the Prometheus configurations
kubectl apply -f config-map.yaml
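The ConfigMap holds Prometheus's own configuration (prometheus.yml) so it can be changed without rebuilding the image. An illustrative sketch of the shape of such a ConfigMap — the names and scrape jobs here are examples, not the repo's actual contents:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: kubernetes-nodes
        kubernetes_sd_configs:
          - role: node
```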
-
Create the Prometheus Deployment by applying 'prom-deploy.yaml'
kubectl apply -f prom-deploy.yaml
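The deployment runs the Prometheus container and mounts the ConfigMap from the previous step as its configuration file. Roughly like the following — image, names, and paths are illustrative; the repo's prom-deploy.yaml is authoritative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - --config.file=/etc/prometheus/prometheus.yml
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: config-volume
              mountPath: /etc/prometheus
      volumes:
        - name: config-volume
          configMap:
            name: prometheus-server-conf
```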
-
Expose Prometheus using Ingress by applying the 'ingress-controller.yaml' file.
kubectl apply -f ingress-controller.yaml
This exposes the ingress object on port 8080. To change the port, just edit the ingress file!
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ui
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    # Use the host you used in your kubernetes Ingress Configurations
    - host: prometheus.example.com
      http:
        paths:
          - backend:
              serviceName: prometheus-service
              servicePort: 8080
```
In our previous step, we set up Prometheus to monitor our cluster. Next, we will add Grafana for real-time cluster metric visualization.
For the complete list of setup instructions and customizations, please see: Grafana Docs.
-
All Grafana config files in this section are created for you and hosted on GitHub. Clone this repo using the following command:
git clone https://github.com/daniel-doody/grafana-setup-kubernetes.git
-
Create the Grafana / Prometheus data source ConfigMap:
Note: this is configured for Prometheus; if you have other data sources such as Datadog, you can add them as additional YAML entries under the data section. Inside your cloned Grafana folder, apply 'graf-config.yaml'
kubectl apply -f graf-config.yaml
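A Grafana data source provisioning ConfigMap generally takes the following shape: the url must point at your Prometheus service, and additional sources (e.g. Datadog) become extra entries in the datasources list. This is an illustrative sketch, not the repo's actual graf-config.yaml — in particular, the service name and port are assumptions that must match your Prometheus service:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: monitoring
data:
  prometheus.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus-service.monitoring.svc:8080
        isDefault: true
```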
-
Apply the Grafana service file to expose the Grafana port.
kubectl apply -f graf-service.yaml
This will expose Grafana on NodePort 32000:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '3000'
spec:
  selector:
    app: grafana
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 32000
```
Now you should be able to access the Grafana dashboard using any node IP on your cluster at port 32000. Make sure the port is open in your firewall so it can be reached from your local machine.
Use the following default username and password to log in:
User: admin Pass: admin
Once you log in with the default credentials, Grafana will prompt you to change the password.
Now that Prometheus and Grafana are all set up, we will add Kubeview for real-time cluster visualization. Kubeview provides an icon-based overview of your cluster objects.
For the complete list of setup instructions and customizations, please see: Kubeview Docs.
- All Kubeview config files in this section are created for you and hosted on GitHub. Clone this repo using the following command:
git clone https://github.com/sxhanx/setup-kubeview.git
- Inside your cloned Kubeview folder, apply the 'service.yaml'
kubectl apply -f service.yaml
- Apply the Kubeview deployment file
kubectl apply -f deployment.yaml
If you are running a local cluster using MiniKube, please use the Electron version of our application instead of our web application. The web application (SkaleSafe.com) only works with cloud-hosted clusters, or local clusters with an SSL certificate installed. For security reasons, the browser will only display cluster metrics when an active SSL certificate is configured in the cluster.
To run SkaleSafe on your local machine using Electron, follow this guide: ELECTRON.md
If you are new to Kubernetes, we welcome you to use our app as a learning tool. Follow this quick-start guide to install MiniKube on your machine, and spin up your first cluster in no time! After you have successfully set up the cluster, please continue with installing Prometheus, Grafana, and KubeView.
Upon contributing, you agree that your contributions will be licensed under the project's MIT License.
Please feel free to reach out to us if you would like to contribute or if you have any questions or concerns!
If you like this project, please give it a ⭐️!