
home-infra

The intent here is to maintain various apps and configurations that I run at home. Below is more detail about each.

Usage

Most subdirectories of k8s/apps represent an app, or an infrastructure component that supports the apps. The subdirectory name is the app-name referenced below. The scripts are as follows:

  • ./k8s/scripts/deploy.sh <app-name>: Deploy the app. For example, ./k8s/scripts/deploy.sh plex deploys the plex app to the cluster.
  • ./k8s/scripts/preview.sh <app-name>: Prints the final Kubernetes YAML after it is resolved by Kustomize, and surfaces any client-detectable errors in the YAML. Does not deploy anything to the cluster.
  • ./k8s/scripts/clean.sh <app-name>: Delete the app from the cluster. It deletes the app immediately, so be careful!

k8s/apps

These are my apps running at home in Kubernetes. I am currently using k3s on either Debian or TrueNAS (experimenting with both).

k8s/apps/home-assistant

This is my Home Assistant + Z-Wave JS (zwavejs2mqtt) server implementation running on Docker. See k8s/apps/home-assistant/README.md

k8s/apps/plex

A Plex Media Server on Kubernetes.

k8s/apps/photoprism

PhotoPrism is set up for photos.scott.willeke.com and photos.oksana.willeke.com.

k8s/apps/transmission

A Transmission Bittorrent server to download and seed torrents.

k8s/apps/unifi

A Ubiquiti UniFi Controller deployment.

Infrastructure/Supporting Apps

The apps below here are installed to support the other apps in the cluster.

k8s/apps/cert-manager

This is a cert-manager instance that provisions certificates for *.scott.willeke.com, *.oksana.willeke.com, and *.activescott.com.

k8s/apps/k8tz

It's annoying to see different times in the logs of different apps, so k8tz is provisioned to ensure that all pods/containers get the same timezone as the host.

How it Works

Kustomize

I use Kustomize for packaging my Kubernetes apps. With Helm, you create a pre-packaged component with a pre-defined set of extensibility points that can be customized (i.e. in values.yaml). Kustomize instead references an existing Kubernetes "app" (think of it as an "example app") and customizes that app for your needs with patches that add, remove, or change values in the Kubernetes resources. It can customize anything in the referenced app's Kubernetes resources. By convention, the "example app" is defined in a base folder as a standard set of Kubernetes resources, and each patched version is a subfolder of the overlays folder. I'd say Helm provides better encapsulation if you have a lot of dependents, but Kustomize has less formality and less to learn, and is a bit closer to plain Kubernetes.
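A minimal sketch of that base/overlay convention; the paths, app name, image, and patch here are hypothetical, not copied from this repo:

```yaml
# k8s/apps/plex/overlays/home/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base        # the "example app": plain Deployment/Service/etc.
patches:
  - target:
      kind: Deployment
      name: plex
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/image
        value: plexinc/pms-docker:latest
```

Running kustomize build overlays/home (or kubectl kustomize) resolves the base plus the patch into the final YAML, which is what the preview.sh script above surfaces.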

The best overview of Kustomize is their readme: https://github.com/kubernetes-sigs/kustomize

A good example of using overlays is https://github.com/kubernetes-sigs/kustomize/blob/master/examples/springboot/

Reference for Kustomization files: https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/

Good detail on different patches at https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patches/

K8s Labels:

I put a couple of labels on various Kubernetes resources to help keep them organized and understandable.

TLDR:

commonLabels:
  # use `app.activescott.com/name` to avoid conflicts with other people's resources using "app" label.
  app.activescott.com/name: app-name
  app.activescott.com/tenant: everyone # or scott or oksana, etc.
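When that commonLabels block sits in an app's kustomization.yaml, Kustomize stamps the labels onto every generated resource (and into matching selectors). A hypothetical resulting Deployment fragment, assuming an app named plex:

```yaml
# Hypothetical `kustomize build` output fragment with the commonLabels above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plex
  labels:
    app.activescott.com/name: plex
    app.activescott.com/tenant: everyone
```

Resources can then be filtered per app or per tenant, e.g. kubectl get all -l app.activescott.com/tenant=everyone.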

Notes to Self

Kubernetes Memory & Limits

Specify a memory limit on containers (spec.containers[].resources.limits.memory). The limit appears to correspond to the Prometheus metric container_memory_usage_bytes. That metric, though, includes cached items (think filesystem cache) that can be evicted under memory pressure (ref1, ref2, ref3).

The container_memory_working_set_bytes metric is what the OOMKiller watches, though, and it tends to be much lower. The limit specified in the YAML is honored in two ways (this insight from ref2):

  1. If the pod's container_memory_usage_bytes hits the limit, then the pod/container (the OS?) will reduce the cache memory to keep the pod under the limit, as long as container_memory_working_set_bytes + container_memory_cache < limit.
  2. If container_memory_working_set_bytes isn't brought under the limit, then the OOMKiller will kill the pod/container.
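For reference, a container spec with both values set; the app name, image, and numbers here are illustrative, not a recommendation:

```yaml
# Hypothetical container snippet: the limit caps total usage (including
# evictable cache); the OOMKiller acts on the working set.
spec:
  containers:
    - name: plex
      image: plexinc/pms-docker
      resources:
        requests:
          memory: "1Gi"
        limits:
          memory: "2Gi"
```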

Do Requests need to be specified?

Yes. Note, though, that if you specify only a limit, Kubernetes defaults the request to the limit:

Note: If you specify a limit for a resource, but do not specify any request, and no admission-time mechanism has applied a default request for that resource, then Kubernetes copies the limit you specified and uses it as the requested value for the resource. – https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

Pods without Limits Specified

To identify pods where a container has no memory limit specified:

$ kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.spec.containers[].resources.limits.memory == null) | .metadata.name'
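A quick sanity check of that jq filter, run against hypothetical pod JSON rather than a live cluster (the pod names and structure here are made up for illustration):

```shell
# Fake `kubectl get pods --all-namespaces -o json` output (two hypothetical pods).
cat <<'EOF' > /tmp/pods.json
{
  "items": [
    { "metadata": { "name": "plex-0" },
      "spec": { "containers": [ { "resources": { "limits": { "memory": "2Gi" } } } ] } },
    { "metadata": { "name": "transmission-0" },
      "spec": { "containers": [ { "resources": {} } ] } }
  ]
}
EOF

# Same jq filter as above: prints only the pod whose container lacks a memory limit.
jq -r '.items[] | select(.spec.containers[].resources.limits.memory == null) | .metadata.name' /tmp/pods.json
# prints: transmission-0
```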

TODO:
