
This is my homelab; there are many like this, but this one is mine.


irishlab


... where I learn Architecture, Infrastructure, Networking, DevOps, and a few other things.

This is a mono repository for my homelab infrastructure and Kubernetes cluster, implementing Infrastructure as Code (IaC) and GitOps practices using tools like Kubernetes, Flux, Renovate and GitHub Actions.
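
For context, this is roughly how a repository like this gets wired to a cluster with the Flux CLI; the owner, repository and path below mirror this repo's layout, but the exact bootstrap command is an assumption:

```sh
# Point Flux at the Git repository; cluster/base is the Flux entrypoint.
flux bootstrap github \
  --owner=irish1986 \
  --repository=irishlab \
  --path=cluster/base \
  --personal
```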

Feel free to open a GitHub issue if you have any questions.

Overview


Architecture

Rack

I currently lack the physical space to deploy a full-size server rack, but it is a project of mine. At the moment, I am using two (2) wall-mounted network racks which, beside the obvious issue of not fitting full-size servers, are great. I also host two (2) Tripp Lite UPS units, one for each rack, for power protection.

I have purchased a UCTRONICS 1U rack mount that can fit up to six (6) Raspberry Pi 4 boards with Power-over-Ethernet.

Hardware

I am mostly running consumer-grade hardware and single-board computers.

| Device | Count | OS Disk Size | Data Disk Size | RAM | Purpose |
|---|---|---|---|---|---|
| 2U Server | 1 | 120GB SSD | 2x1TB NVMe cache, 4x1TB SSD ZFS | 32GB | NAS server (soon commissioned) |
| 4U Server | 1 | 2x120GB SSD | 2x1TB NVMe cache, 12x10TB HDD ZFS | 128GB | NAS server (soon commissioned) |
| 4U Workstation | 1 | 2x1TB NVMe | None | 64GB | Daily driver, gaming workstation |
| Dell Micro 5060 | 7 | 120GB SSD | 1x500GB NVMe | 32GB | Proxmox and Ceph cluster |
| Dell Wyse 5060 | 1 | 64GB SSD | None | 32GB | Proxmox test cluster |
| Jetson Nano | 1 | 32GB SD | 128GB USB | 4GB | Edge node and CUDA workloads |
| Raspberry Pi 4 | 1 | 32GB SD | None | 4GB | KVM node |
| Raspberry Pi 4 | 1 | 32GB USB | 128GB USB | 4GB | Edge node and DNS server |
| Raspberry Pi 4 | 6 | 32GB USB | 128GB USB | 8GB | Edge node and K3s server |
| Synology DS1812+ | 1 | None | 8x10TB SHR-2 | 2GB | NAS server (soon decommissioned) |
| Synology DS218J | 1 | None | 2x4TB SHR-2 | 2GB | NAS server (off-site backup) |

Networking

I am running a medium-sized TP-Link Omada stack.

KVM

I bought a PiKVM V3 a while back, and I am planning on deploying a TESMart 16-port HDMI KVM switch in the near future for bare-metal KVM access.

Fun Stuff

I do host a few other devices in my lab, such as a Lutron hub, an HDHomeRun TV tuner, an external Blu-ray drive, etc.

Operating System

PiKVM

Proxmox

The Kubernetes cluster is not hyper-converged, as block storage is provided by the underlying PVE Ceph cluster using rook-ceph-external.
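
A quick way to sanity-check the external hookup from the Kubernetes side (the rook-ceph-external namespace follows the upstream convention and is an assumption here):

```sh
# The external CephCluster should report a Connected phase rather than Ready.
kubectl -n rook-ceph-external get cephcluster
# The RBD StorageClass backed by the PVE cluster should be listed.
kubectl get storageclass
```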

Ubuntu OS

The cluster is running on a mix of Ubuntu Server installs, some deployed on bare metal and others as virtual machines. Ubuntu Server is great given its support for both AMD64 and ARM64 architectures. Finally, I am running various versions of Ubuntu depending on workload requirements (e.g. a newer kernel for specific hardware).

Templates are provisioned as Proxmox templates built from Ubuntu Cloud Images, which are slightly smaller than the standard ISO images and better optimized for cloud-native applications. There are also Ubuntu Minimal Cloud Images, which I would like to explore further.
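
As a rough sketch, building such a template on PVE looks like this (the VM ID, storage name and image release are assumptions):

```sh
# Download the Ubuntu cloud image and turn it into a reusable PVE template.
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
qm create 9000 --name ubuntu-jammy-template --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 jammy-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit --boot c --bootdisk scsi0 --serial0 socket
qm template 9000
```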

Synology

My DS1812+ was commissioned in March 2014 and was my first dip into the world of homelabbing. It was my primary server for quite a while and my main network-attached storage for almost a decade. It has been through many disk configurations and software upgrades, as well as a couple of power supplies, but in early October 2023 Synology's End-of-Life announcement for DSM 6.2 meant this device's sunset had come. It has been my workhorse for many years and I have learned quite a lot from it; my greatest lesson is to move away from proprietary ecosystems. Finally, I do have an extra DS218J located at my parents' house as my primary off-site backup for invaluable data.

Hypervisor

PCI-E Passthrough

Infrastructure as Code

Cloud Init

Terraform

This cluster consists of VMs provisioned on PVE via the Terraform Proxmox provider.
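
The day-to-day loop is the standard Terraform workflow; the directory name below is an assumption:

```sh
cd terraform/proxmox
terraform init    # fetch the Proxmox provider
terraform plan    # preview changes to the VMs
terraform apply   # provision or update the VMs on PVE
```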

Ansible

These run k3s, deployed using an Ansible playbook of my own.
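
A hypothetical invocation (the inventory and playbook names are placeholders, not the repo's actual files):

```sh
ansible-playbook -i inventory/hosts.ini k3s-install.yml --become
```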

Kubernetes

This repo generally attempts to follow the structure and practices of the excellent k8s-at-home/template-cluster-k3; check it out if you're uncomfortable starting out with an immutable operating system.

K3S

The cluster is running on a mix of Ubuntu AMD64 Cloud Images deployed via Terraform as virtual machines on my Proxmox cluster.

  • Server nodes (aka control nodes) are defined as hosts running the k3s server command, with control-plane and datastore components managed by K3s.

  • Agent nodes (aka worker nodes) are defined as hosts running the k3s agent command, without any datastore or control-plane components. These nodes are a mix and match of versions in order to test against a large set; see the sketch after this list.
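
For reference, joining both node types with the upstream install script looks roughly like this (the token and server URL are placeholders):

```sh
# On a server (control-plane) node:
curl -sfL https://get.k3s.io | sh -s - server --token <TOKEN>
# On an agent (worker) node, pointing at an existing server:
curl -sfL https://get.k3s.io | K3S_URL=https://<SERVER_IP>:6443 K3S_TOKEN=<TOKEN> sh -s - agent
```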

On top of this, the cluster extends onto six (6) Raspberry Pi 4 8GB boards which are powered via PoE. These run Ubuntu Server ARM64 (Jammy Jellyfish) and are evenly split between server and agent nodes.

In the end, this K3s cluster is multi-architecture, running a variety of operating systems, Linux kernels and heterogeneous configurations.

Core

  • kube-vip provides Kubernetes clusters with a virtual IP and load balancer for both the control plane (for building a highly-available cluster) and Kubernetes Services of type LoadBalancer without relying on any external hardware or software.
  • metal-lb is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.
  • longhorn is a distributed block storage system for Kubernetes.
  • rook-ceph: Provides persistent volumes, allowing any application to consume RBD block storage from the underlying PVE cluster.
  • traefik: Provides ingress cluster services.
  • rancher is an open source container management platform built for organizations that deploy containers in production.
  • kured is a daemonset that performs safe automatic node reboots when the need to do so is indicated by the package management system of the underlying OS.
  • sops: Encrypts secrets so they are safe to store, even in a public repository (see the example after this list).
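
As an illustration of the sops workflow, encrypting a Secret manifest with an age key might look like this (the recipient key and file path are placeholders, and the repo may well use GPG instead):

```sh
sops --encrypt --age <age-public-key> \
  --encrypted-regex '^(data|stringData)$' \
  --in-place cluster/apps/example/secret.sops.yaml
```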

Repository structure

The Git repository contains the following directories under cluster, ordered below by how Flux will apply them.

  • base directory is the entrypoint to Flux
  • crds directory contains custom resource definitions (CRDs) that need to exist globally in your cluster before anything else exists
  • core directory (depends on crds) contains important infrastructure applications (grouped by namespace) that should never be pruned by Flux
  • apps directory (depends on core) is where common applications (grouped by namespace) are placed; Flux will prune resources here if they are no longer tracked by Git

```
./cluster
├── ./apps
├── ./base
├── ./core
└── ./crds
```
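
Once bootstrapped, the apply order can be observed from the CLI (assuming each directory above is applied as its own Flux Kustomization):

```sh
# --watch streams status updates as Flux reconciles each Kustomization.
flux get kustomizations --watch
```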

GitOps

Flux watches my cluster folder (see Repository structure above) and makes changes to my cluster based on the YAML manifests.

Dependabot watches my GitHub Actions directory for dependency updates; when one is found, a PR is automatically created, and merging it applies the change to my repo.

Renovate watches my entire repository for dependency updates; when one is found, a PR is automatically created. When PRs are merged, Flux applies the changes to my cluster.
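
After a PR merges, Flux can also be nudged manually instead of waiting for the next sync interval (the kustomization name is an assumption):

```sh
flux reconcile source git flux-system            # pull the latest commit
flux reconcile kustomization apps --with-source  # re-apply the apps directory
```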

Reference, Resources & Links

A big thank you goes to the awesome people and projects that inspired this one, in no specific order.

I also want to thank the awesome k8s-at-home community for all their work on their Helm charts, which helped me a lot.

License

MIT – © 2023 Simon HARVEY
