vember31 / home-ops

Applications @ home running on Kubernetes, maintained via GitOps using Flux

My Home Operations Repository 🏡

... managed via Kubernetes, Flux, and Renovate 🤖

Kubernetes   Status-Page   Plex

📖 Overview

This is a mono repository for my home infrastructure and Kubernetes cluster. It follows Infrastructure as Code (IaC) and GitOps practices using Kubernetes, Flux, and Renovate.

⛵ Kubernetes

There is a template over at onedr0p/flux-cluster-template if you want to follow along with some of the practices used here.

Installation

This cluster runs k3s, provisioned atop Ubuntu 22.04 VMs hosted on Proxmox v8. It is a semi-hyper-converged cluster, meaning workloads and block storage share the same available resources on each node. Two of the nodes also run OpenMediaVault, using XFS and MergerFS as file systems, and serve NFS, SMB, and S3 (via MinIO) for bulk file storage and Longhorn backups. These bulk storage drives are protected by SnapRAID on a weekly schedule.
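
As a sketch of how a node might hand these responsibilities over to the cluster (a hypothetical example, not copied from this repo), the k3s server config could disable the packaged Traefik and Klipper load balancer in favor of the Helm-managed Traefik and MetalLB listed below:

```yaml
# /etc/rancher/k3s/config.yaml -- hypothetical k3s server config
# Disable the bundled components that Flux-managed equivalents replace:
disable:
  - traefik    # ingress handled by the Helm-managed Traefik release
  - servicelb  # LoadBalancer services handled by MetalLB
# Allow the kubeconfig to be read by non-root tooling on the node
write-kubeconfig-mode: "0644"
```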

Core Components

  • cert-manager: manages TLS (X.509) certificates for all HTTP-based services in the cluster (deployed via a HelmRelease, as sketched after this list)
  • cloudnative-pg: high-availability PostgreSQL database built for Kubernetes
  • eraser: removes non-running images from all nodes in a cluster
  • external-dns: automatically syncs DNS records from cluster ingresses to a DNS provider (Cloudflare)
  • external-secrets: manages Kubernetes secrets using GitLab CI/CD variables
  • kured: Kubernetes daemonset that performs safe automatic node reboots when the need to do so is indicated by the package management system of the underlying OS
  • kube-prometheus-stack: Prometheus, Grafana, Alertmanager & Thanos, bundled into one Helm chart for high-availability observability & monitoring of the cluster
  • longhorn: distributed block storage for Kubernetes volume persistence
  • metal-lb: bare-metal load-balancer providing LoadBalancer services for the cluster
  • system-upgrade-controller: automated, rolling k3s version upgrades
  • traefik: ingress controller for Kubernetes, acting as a modern HTTP reverse proxy and load balancer
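
Each of these is deployed as a Helm chart reconciled by Flux. A minimal HelmRelease sketch for cert-manager follows; the chart source, namespace, and version are illustrative assumptions, and the apiVersion varies with the Flux release:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: cert-manager      # assumed namespace
spec:
  interval: 30m
  chart:
    spec:
      chart: cert-manager
      version: 1.14.x          # illustrative pinned range; Renovate proposes bumps
      sourceRef:
        kind: HelmRepository
        name: jetstack         # assumed HelmRepository in flux-system
        namespace: flux-system
  values:
    installCRDs: true          # classic chart flag; newer charts use crds.enabled
```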

GitOps

Flux watches the cluster defined in the kubernetes folder (see Directories below) and makes changes to the cluster based on the state of this Git repository.

Flux works by recursively searching the kubernetes/apps folder until it finds the top-most kustomization.yaml in each directory, then applies all the resources listed in it. That kustomization.yaml will generally contain a namespace resource and one or more Flux kustomizations (ks.yaml). Under the control of those Flux kustomizations, a HelmRelease or other resources related to the application will be applied.
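
For example, an app directory's top-level kustomization.yaml and its Flux Kustomization could look like this (the app name and paths are illustrative, not taken from this repository):

```yaml
# kubernetes/apps/default/echo-server/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ./namespace.yaml   # the namespace resource
  - ./ks.yaml          # one or more Flux kustomizations
---
# kubernetes/apps/default/echo-server/ks.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: echo-server
  namespace: flux-system
spec:
  interval: 30m
  path: ./kubernetes/apps/default/echo-server/app   # holds the HelmRelease
  prune: true
  sourceRef:
    kind: GitRepository
    name: home-ops
```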

Renovate watches the entire repository looking for dependency updates. When updates are found, PRs are automatically created. Flux applies the changes to the cluster once PRs are merged.

Directories

This Git repository contains the following directories under the kubernetes folder.

πŸ“ kubernetes
β”œβ”€β”€ πŸ“ apps           # applications
β”œβ”€β”€ πŸ“ bootstrap      # bootstrap procedures
β”œβ”€β”€ πŸ“ flux           # core flux configuration
└── πŸ“ templates      # re-useable components

Flux Workflow

This is a high-level look at how Flux deploys applications with dependencies. Below are three example apps: postgres, lldap, and authelia. postgres needs to be running and healthy before lldap and authelia can start. Once postgres is healthy, lldap is deployed, and once lldap is healthy, authelia is deployed.

graph TD;
  id1>Kustomization: cluster] -->|Creates| id2>Kustomization: cluster-apps];
  id2>Kustomization: cluster-apps] -->|Creates| id3>Kustomization: postgres];
  id2>Kustomization: cluster-apps] -->|Creates| id6>Kustomization: lldap]
  id2>Kustomization: cluster-apps] -->|Creates| id8>Kustomization: authelia]
  id2>Kustomization: cluster-apps] -->|Creates| id5>Kustomization: postgres-cluster]
  id3>Kustomization: postgres] -->|Creates| id4[HelmRelease: postgres];
  id5>Kustomization: postgres-cluster] -->|Depends on| id3>Kustomization: postgres];
  id5>Kustomization: postgres-cluster] -->|Creates| id10[Postgres Cluster];
  id6>Kustomization: lldap] -->|Creates| id7(HelmRelease: lldap);
  id6>Kustomization: lldap] -->|Depends on| id5>Kustomization: postgres-cluster];
  id8>Kustomization: authelia] -->|Creates| id9(HelmRelease: authelia);
  id8>Kustomization: authelia] -->|Depends on| id5>Kustomization: postgres-cluster];
  id9(HelmRelease: authelia) -->|Depends on| id7(HelmRelease: lldap);
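
The "Depends on" edges correspond to dependsOn entries in the Flux kustomizations (HelmReleases support the same field, which is how the authelia release waits on lldap). A sketch of what the authelia Kustomization might contain, with illustrative paths and names:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: authelia
  namespace: flux-system
spec:
  dependsOn:
    - name: postgres-cluster   # reconciled only once this Kustomization is ready
  interval: 30m
  path: ./kubernetes/apps/security/authelia/app   # assumed path
  prune: true
  sourceRef:
    kind: GitRepository
    name: home-ops
```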

🌐 DNS

I use a UDM Pro SE as the center of my network. DHCP leases point to Blocky as the primary DNS and PiHole as secondary. Blocky is hosted within the Kubernetes cluster as a daemonset for high availability across 4 nodes (5 VMs), and PiHole is hosted in an LXC container on one of the nodes. A spare Raspberry Pi is available for further redundancy.

Blocky resolves all local (*.local.${SECRET_DOMAIN}) DNS entries to Traefik (the reverse proxy), which routes them to the appropriate ingress. All forwarded DNS queries that leave the cluster are sent to Cloudflare via DoT (DNS-over-TLS).
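
A minimal sketch of the relevant Blocky settings (the Traefik IP, the placeholder domain, and the exact schema are assumptions here and vary by Blocky version):

```yaml
upstreams:
  groups:
    default:
      - tcp-tls:1.1.1.1:853   # forwarded queries leave the cluster via DoT to Cloudflare
      - tcp-tls:1.0.0.1:853
customDNS:
  mapping:
    # subdomains match too, so *.local.example.com resolves to Traefik's
    # LoadBalancer IP; both the domain and the IP are placeholders
    local.example.com: 192.168.1.80
```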

🔧 Hardware

| Device | Count | OS Disk Size | Data Disk Size | RAM | Operating System | Purpose |
|---|---|---|---|---|---|---|
| HP Workstation Z620 | 1 | 1TB SSD | 1TB SSD | 96GB | Ubuntu | Kubernetes Server |
| Lenovo Workstation | 1 | 250GB SSD | 8TB HDD | 8GB | Ubuntu | Kubernetes Server |
| Dell Optiplex Mini | 1 | 1TB SSD | 3x8TB (mergerfs, snapraid) | 32GB | Ubuntu | Kubernetes Server |
| GMKTec Nuc (i7-12650H) | 1 | 1TB SSD | 1TB SSD | 24GB | Ubuntu | Kubernetes Server |
| Unifi UDM Pro SE | 1 | - | - | - | - | 2.5Gb PoE Router |
| Unifi U6 AP Pro | 4 | - | - | - | - | Access Points |
| Unifi Switch 24 Pro | 1 | - | - | - | - | 1Gb PoE Switch |
| Unifi NVR | 1 | - | 8TB HDD | - | - | NVR |
| Unifi G4 Bullet | 6 | - | - | - | - | Security |

🤝 Gratitude and Thanks

Thanks to all the people who donate their time to the Home Operations Discord community. Be sure to check out kubesearch.dev for ideas on how to deploy applications or get ideas on what you may deploy.
