k8sgpt

Giving Kubernetes Superpowers to everyone

Home Page: http://k8sgpt.ai

k8sgpt is a tool for scanning your Kubernetes clusters, diagnosing, and triaging issues in simple English.

It codifies SRE experience into its analyzers and uses them to pull out the most relevant information and enrich it with AI.

CLI Installation

Linux/Mac via brew

brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt

RPM-based installation (RedHat/CentOS/Fedora)

32 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.9/k8sgpt_386.rpm
sudo rpm -ivh k8sgpt_386.rpm

64 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.9/k8sgpt_amd64.rpm
sudo rpm -ivh k8sgpt_amd64.rpm

DEB-based installation (Ubuntu/Debian)

32 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.9/k8sgpt_386.deb
sudo dpkg -i k8sgpt_386.deb

64 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.9/k8sgpt_amd64.deb
sudo dpkg -i k8sgpt_amd64.deb

APK-based installation (Alpine)

32 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.9/k8sgpt_386.apk
apk add k8sgpt_386.apk

64 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.2.9/k8sgpt_amd64.apk
apk add k8sgpt_amd64.apk

Failing Installation on WSL or Linux (missing gcc)

When installing Homebrew on WSL or Linux, you may encounter the following error:

==> Installing k8sgpt from k8sgpt-ai/k8sgpt
Error: The following formula cannot be installed from a bottle and must be built from the source.
  k8sgpt
Install Clang or run brew install gcc.

If you install gcc as suggested, the problem will persist. Therefore, you need to install the build-essential package:

   sudo apt-get update
   sudo apt-get install build-essential

Windows

  • Download the latest Windows binaries of k8sgpt from the Release tab based on your system architecture.
  • Extract the downloaded package to your desired location and add the binary's location to your system PATH variable (see the example below).
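
For example, assuming the binary was extracted to C:\k8sgpt (a hypothetical location), the user PATH could be extended from a command prompt; editing it through the System Properties dialog works just as well:

setx PATH "%PATH%;C:\k8sgpt"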

Operator Installation

To install within a Kubernetes cluster, please use our k8sgpt-operator; installation instructions are available here.

This mode of operation is ideal for continuous monitoring of your cluster and can integrate with your existing monitoring stack, such as Prometheus and Alertmanager.
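
As a sketch, installing the operator via Helm looks roughly like the following; the chart repository URL, release name, and namespace below are assumptions based on the k8sgpt-operator project, so verify them against the linked instructions:

helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace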

Quick Start

  • Currently the default AI provider is OpenAI, so you will need to generate an API key from OpenAI.
    • You can do this by running k8sgpt generate to open a browser link to generate it.
  • Run k8sgpt auth new to store the key in k8sgpt.
    • You can provide the key directly using the --password flag (see the example after this list).
  • Run k8sgpt filters to manage the active filters used by the analyzer. By default, all filters are executed during analysis.
  • Run k8sgpt analyze to run a scan.
  • Use k8sgpt analyze --explain to get a more detailed explanation of the issues.
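
For example, a non-interactive setup might look like the following; OPENAI_API_KEY is just an assumed environment variable holding your key:

k8sgpt generate
k8sgpt auth new --password "$OPENAI_API_KEY"
k8sgpt analyze --explain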

Analyzers

K8sGPT uses analyzers to triage and diagnose issues in your cluster. It ships with a set of built-in analyzers, and you will also be able to write your own.

Built in analyzers

Enabled by default

  • podAnalyzer
  • pvcAnalyzer
  • rsAnalyzer
  • serviceAnalyzer
  • eventAnalyzer
  • ingressAnalyzer
  • statefulSetAnalyzer
  • deploymentAnalyzer
  • cronJobAnalyzer
  • nodeAnalyzer

Optional

  • hpaAnalyzer
  • pdbAnalyzer
  • networkPolicyAnalyzer
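
Optional analyzers are switched on through filters (see Using filters below). Assuming the filter names mirror the resource kinds they analyze (e.g. HorizontalPodAutoScaler, PodDisruptionBudget, NetworkPolicy; check k8sgpt filters list for the exact names on your version), enabling the HPA analyzer would look like:

k8sgpt filters add HorizontalPodAutoScaler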

Examples

Run a scan with the default analyzers

k8sgpt generate
k8sgpt auth new
k8sgpt analyze --explain

Filter on resource

k8sgpt analyze --explain --filter=Service

Filter by namespace

k8sgpt analyze --explain --filter=Pod --namespace=default

Output to JSON

k8sgpt analyze --explain --filter=Service --output=json

Anonymize during explain

k8sgpt analyze --explain --filter=Service --output=json --anonymize

Using filters

List filters

k8sgpt filters list

Add default filters

k8sgpt filters add [filter(s)]

Examples:

  • Simple filter: k8sgpt filters add Service
  • Multiple filters: k8sgpt filters add Ingress,Pod

Remove default filters

k8sgpt filters remove [filter(s)]

Examples:

  • Simple filter: k8sgpt filters remove Service
  • Multiple filters: k8sgpt filters remove Ingress,Pod

Additional commands

List configured backends

k8sgpt auth list

List integrations

k8sgpt integrations list

Activate integrations

k8sgpt integrations activate [integration(s)]

Use integration

k8sgpt analyze --filter=[integration(s)]

Deactivate integrations

k8sgpt integrations deactivate [integration(s)]
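
As a concrete sketch, assuming the Trivy integration is available and that activating it adds a VulnerabilityReport filter (these names are illustrative; check k8sgpt integrations list on your version):

k8sgpt integrations activate trivy
k8sgpt analyze --filter=VulnerabilityReport
k8sgpt integrations deactivate trivy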

Serve mode

k8sgpt serve

Analysis with serve mode

curl -X GET "http://localhost:8080/analyze?namespace=k8sgpt&explain=false"
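
If port 8080 is already in use, the serve port can be changed; assuming a --port flag (verify with k8sgpt serve --help):

k8sgpt serve --port 8081
curl -X GET "http://localhost:8081/analyze?namespace=k8sgpt&explain=false"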

Additional AI providers

Azure OpenAI

Prerequisites: an Azure OpenAI deployment is needed; please visit the official Microsoft documentation to create your own.

To authenticate with k8sgpt, you will need your tenant's Azure OpenAI endpoint ("https://<your Azure OpenAI endpoint>"), the API key used to access your deployment, the deployment name of your model, and the model name itself.

Run k8sgpt

To authenticate, run k8sgpt auth with the azureopenai backend:

k8sgpt auth --backend azureopenai --baseurl https://<your Azure OpenAI endpoint> --engine <deployment_name> --model <model_name>

Lastly, enter your Azure API key after the prompt.

Now you are ready to analyze with the Azure OpenAI backend:

k8sgpt analyze --explain --backend azureopenai

Running local models

To run local models, you can use any OpenAI-compatible API, for instance LocalAI, which uses llama.cpp and ggml to run inference on consumer-grade hardware. Models supported by LocalAI include, for example, Vicuna, Alpaca, LLaMA, Cerebras, GPT4ALL, GPT4ALL-J, and Koala.

To run local inference, you need to download the models first; for instance, you can find ggml-compatible models on Hugging Face (for example Vicuna, Alpaca, and Koala).

Start the API server

To start the API server, follow the instructions in the LocalAI documentation.
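
As a rough sketch only (the image name, flags, and paths below are assumptions; follow the LocalAI documentation for the authoritative steps), running LocalAI in Docker with a local models directory might look like:

docker run -p 8080:8080 -v $PWD/models:/models --rm -ti quay.io/go-skynet/local-ai:latest --models-path /models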

Run k8sgpt

To authenticate, run k8sgpt auth new with the localai backend:

k8sgpt auth new --backend localai --model <model_name> --baseurl http://localhost:8080/v1

Now you can analyze with the localai backend:

k8sgpt analyze --explain --backend localai

How does anonymization work?

With this option, the data is anonymized before being sent to the AI Backend. During the analysis execution, k8sgpt retrieves sensitive data (Kubernetes object names, labels, etc.). This data is masked when sent to the AI backend and replaced by a key that can be used to de-anonymize the data when the solution is returned to the user.

  1. Error reported during analysis:
Error: HorizontalPodAutoscaler uses StatefulSet/fake-deployment as ScaleTargetRef which does not exist.
  2. Payload sent to the AI backend:
Error: HorizontalPodAutoscaler uses StatefulSet/tGLcCRcHa1Ce5Rs as ScaleTargetRef which does not exist.
  3. Payload returned by the AI:
The Kubernetes system is trying to scale a StatefulSet named tGLcCRcHa1Ce5Rs using the HorizontalPodAutoscaler, but it cannot find the StatefulSet. The solution is to verify that the StatefulSet name is spelled correctly and exists in the same namespace as the HorizontalPodAutoscaler.
  4. Payload returned to the user:
The Kubernetes system is trying to scale a StatefulSet named fake-deployment using the HorizontalPodAutoscaler, but it cannot find the StatefulSet. The solution is to verify that the StatefulSet name is spelled correctly and exists in the same namespace as the HorizontalPodAutoscaler.

Anonymization does not currently apply to events.

Configuration

`k8sgpt` stores config data in the `$XDG_CONFIG_HOME/k8sgpt/k8sgpt.yaml` file. The data is stored in plain text, including your OpenAI key.

Config file locations:

OS        Path
MacOS     ~/Library/Application Support/k8sgpt/k8sgpt.yaml
Linux     ~/.config/k8sgpt/k8sgpt.yaml
Windows   %LOCALAPPDATA%/k8sgpt/k8sgpt.yaml
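
Because the key is stored in plain text, you may want to restrict access to the file, for example on Linux:

chmod 600 ~/.config/k8sgpt/k8sgpt.yaml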

Contributing

Please read our contributing guide.

Community

Find us on Slack

License

Apache License 2.0

