lablabs / ansible-role-rke2

Ansible Role to install RKE2 Kubernetes.

Home Page: https://galaxy.ansible.com/ui/standalone/roles/lablabs/rke2/

feature: Solved the `Playbook stuck while starting the RKE2 service on agents` problem on Debian VMs

dannyloxavier opened this issue

Summary

I had this problem and worked around it by installing podman on all VMs before running the Ansible playbook.

Worked flawlessly.

In my case, the playbook was this one:

---

- name: Deploy RKE2
  hosts: all
  become: yes
  pre_tasks:
    - name: Install podman
      ansible.builtin.apt:
        name: podman
        state: latest
        update_cache: yes
  roles:
    - role: lablabs.rke2
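
If your nodes are not all Debian-based, the same workaround should carry over with the distro-agnostic package module instead of apt. A minimal sketch I have not tested:

pre_tasks:
  - name: Install podman (distro-agnostic variant)
    ansible.builtin.package:
      name: podman
      state: present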

You can try it yourself with this Vagrantfile:

#!/usr/bin/env ruby

# One master and three workers on a private libvirt network.
maquinas = {
  "master-01" => "192.168.56.10/24",
  "worker-01" => "192.168.56.20/24",
  "worker-02" => "192.168.56.21/24",
  "worker-03" => "192.168.56.22/24"
}

Vagrant.configure("2") do |config|
  # Global settings, applied once instead of on every loop pass:
  # skip the NFS check, drop the default synced folder, and authorize
  # the host's SSH key so Ansible can log in as vagrant.
  config.nfs.verify_installed = false
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.provision "shell" do |s|
    ssh_pub_key = File.readlines(File.join(Dir.home, ".ssh/id_rsa.pub")).first.strip
    s.inline = <<-SHELL
    echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
    SHELL
  end

  maquinas.each do |nome, ip|
    config.vm.define nome do |maquina|
      maquina.vm.box = "debian/bookworm64"
      maquina.vm.hostname = nome
      maquina.vm.network "private_network", ip: ip, libvirt__network_name: "lab_rke2"
      maquina.vm.provider "libvirt" do |lv|
        lv.memory = 2048
        lv.default_prefix = "lab_rke2_"
      end
    end
  end
end

and this hosts file:

[masters]
master-01 ansible_host=192.168.56.10 ansible_user=vagrant rke2_type=server

[workers]
worker-01 ansible_host=192.168.56.20 ansible_user=vagrant rke2_type=agent
worker-02 ansible_host=192.168.56.21 ansible_user=vagrant rke2_type=agent
worker-03 ansible_host=192.168.56.22 ansible_user=vagrant rke2_type=agent

[k8s_cluster:children]
masters
workers
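
With the VMs up and this inventory in place, the run is the usual one (the playbook filename is whatever you saved the YAML above as; deploy-rke2.yml here is just a placeholder):

ansible-playbook -i hosts deploy-rke2.yml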

Issue Type

Feature Idea

Hmm... not as flawless as I said... =/

root@master-01:~# /var/lib/rancher/rke2/bin/kubectl         --kubeconfig /etc/rancher/rke2/rke2.yaml get pods --all-namespaces
NAMESPACE     NAME                                                    READY   STATUS                  RESTARTS        AGE
kube-system   etcd-master-01                                          1/1     Running                 0               145m
kube-system   helm-install-rke2-canal-246gp                           0/1     Completed               0               146m
kube-system   helm-install-rke2-coredns-6nprw                         0/1     Completed               0               146m
kube-system   helm-install-rke2-ingress-nginx-rwvdv                   0/1     Pending                 0               146m
kube-system   helm-install-rke2-metrics-server-jmpvg                  0/1     Pending                 0               146m
kube-system   kube-apiserver-master-01                                1/1     Running                 0               145m
kube-system   kube-controller-manager-master-01                       1/1     Running                 0               145m
kube-system   kube-proxy-worker-01                                    1/1     Running                 0               145m
kube-system   kube-proxy-worker-02                                    1/1     Running                 0               145m
kube-system   kube-proxy-worker-03                                    1/1     Running                 0               145m
kube-system   kube-scheduler-master-01                                1/1     Running                 0               145m
kube-system   rke2-canal-8mtzd                                        0/2     Init:CrashLoopBackOff   29 (5m5s ago)   145m
kube-system   rke2-canal-jjc4m                                        2/2     Running                 0               145m
kube-system   rke2-canal-q5wmd                                        2/2     Running                 0               145m
kube-system   rke2-canal-tsmx4                                        2/2     Running                 0               145m
kube-system   rke2-coredns-rke2-coredns-776d5cfd89-xv9n2              0/1     Pending                 0               145m
kube-system   rke2-coredns-rke2-coredns-autoscaler-6f964d8b7b-nn6lp   0/1     Pending                 0               145m
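
If you hit the same thing, describe on the crashing pod shows each init container's last state and exit reason, which is what points at the actual cause:

/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml \
  -n kube-system describe pod rke2-canal-8mtzd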

I'm going to solve this. =)
Have a nice day.

Solved!

Just increase memory! 🤣
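
Concretely: the Vagrantfile above gives each VM 2048 MB, while RKE2's hardware requirements ask for at least 4 GB of RAM per node. So bump lv.memory in the provider block (4096 is my pick; anything at or above the minimum should do):

maquina.vm.provider "libvirt" do |lv|
  lv.memory = 4096  # was 2048, below RKE2's 4 GB minimum
  lv.default_prefix = "lab_rke2_"
end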