Kubeinit / kubeinit

Ansible automation to have a KUBErnetes cluster INITialized as soon as possible...

Home Page: https://www.kubeinit.org


Cannot use `ansible_default_ipv4_address` on multi hypervisor setup

rmlandvreugd opened this issue

Describe the bug
When using multiple hypervisors, the facts ansible_default_ipv4_address and distribution_family are set for the first hypervisor only.
Subsequent hypervisors do not get these facts set.

To Reproduce
Steps to reproduce the behavior:

  1. Clone 'https://github.com/Kubeinit/kubeinit.git'
  2. Prepare playbook '...'
  3. Run with these variables:
---
#scenario_variables.yml
kubeinit_stop_after_task: task-prepare-hypervisor
  4. See error
 _____________________________________________________________
/ TASK [../../roles/kubeinit_prepare : Add ansible facts to   \
| hostvars name={{ kubeinit_deployment_node_name }},          |
| ansible_default_ipv4_address={{                             |
| gather_results.ansible_facts.ansible_default_ipv4.address   |
| }}, ansible_hostname={{                                     |
| gather_results.ansible_facts.ansible_hostname }},           |
| ansible_distribution={{                                     |
| gather_results.ansible_facts.ansible_distribution }},       |
| ansible_distribution_major_version={{                       |
| gather_results.ansible_facts.ansible_distribution_major_ver |
\ sion }}, distribution_family={{ distro_family }}]           /
 -------------------------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Monday 19 July 2021  12:19:04 +0200 (0:00:00.115)       0:00:02.652 ***********
Monday 19 July 2021  12:19:04 +0200 (0:00:00.115)       0:00:02.651 ***********
changed: [hypervisor-01 -> ux83.able.nv] => {
    "add_host": {
        "groups": [],
        "host_name": "hypervisor-01",
        "host_vars": {
            "ansible_default_ipv4_address": "10.80.40.118",
            "ansible_distribution": "Fedora",
            "ansible_distribution_major_version": "34",
            "ansible_hostname": "ux83",
            "distribution_family": "Fedora"
        }
    },
    "changed": true
}
 ___________________________________________________________
/ TASK [../../roles/kubeinit_prepare : Clear gather_results \
\ gather_results=None]                                      /
 -----------------------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Monday 19 July 2021  12:19:04 +0200 (0:00:00.082)       0:00:02.735 ***********
Monday 19 July 2021  12:19:04 +0200 (0:00:00.082)       0:00:02.734 ***********
ok: [hypervisor-01 -> ux83.able.nv] => {
    "ansible_facts": {
        "gather_results": null
    },
    "changed": false
}
ok: [hypervisor-02 -> ux84.able.nv] => {
    "ansible_facts": {
        "gather_results": null
    },
    "changed": false
}
ok: [hypervisor-03 -> ux85.able.nv] => {
    "ansible_facts": {
        "gather_results": null
    },
    "changed": false
}

Expected behavior
I expected the add_host task to run on all specified hypervisors.


Infrastructure

  • Hypervisors OS: Fedora
  • Version 34

Deployment command

ansible-playbook --user root -i ./hosts/okd/inventory --become --become-user root --extra-vars "@scenario_variables.yml" ./playbooks/okd.yml -v

Inventory file diff

Run the following command:

diff \
    <(curl https://raw.githubusercontent.com/Kubeinit/kubeinit/main/kubeinit/hosts/okd/inventory) \
    ./hosts/okd/inventory

And paste the output:

25,26c25,26
< kubeinit_inventory_cluster_name=okdcluster
< kubeinit_inventory_cluster_domain=kubeinit.local
---
> kubeinit_inventory_cluster_name=xxx
> kubeinit_inventory_cluster_domain=domain.something
33c33
< ram=25165824
---
> ram=33554432
43c43
< vcpus=8
---
> vcpus=4
61c61
< ram=16777216
---
> ram=25165824
76,78c76,78
< hypervisor-01 ansible_host=nyctea
< # hypervisor-02 ansible_host=tyto
< # hypervisor-03 ansible_host=strix
---
> hypervisor-01 ansible_host=hv1.domain.something
> hypervisor-02 ansible_host=hv2.domain.something
> hypervisor-03 ansible_host=hv3.domain.something
89,90c89,90
< okd-controller-02 ansible_host=10.0.0.2 mac=52:54:00:53:75:61 interfaceid=fb2028cf-dfb9-4d17-827d-3fae36cb3e98 target=hypervisor-01 type=virtual
< okd-controller-03 ansible_host=10.0.0.3 mac=52:54:00:96:67:20 interfaceid=d43b705e-86ce-4955-bbf4-3888210af82e target=hypervisor-01 type=virtual
---
> okd-controller-02 ansible_host=10.0.0.2 mac=52:54:00:53:75:61 interfaceid=fb2028cf-dfb9-4d17-827d-3fae36cb3e98 target=hypervisor-02 type=virtual
> okd-controller-03 ansible_host=10.0.0.3 mac=52:54:00:96:67:20 interfaceid=d43b705e-86ce-4955-bbf4-3888210af82e target=hypervisor-03 type=virtual
97,100c97,100
< okd-compute-02 ansible_host=10.0.0.7 mac=52:54:00:33:75:35 interfaceid=a9cc79f3-0892-47af-9195-6c28c718c2a0 target=hypervisor-01 type=virtual
< # okd-compute-03 ansible_host=10.0.0.8 mac=52:54:00:51:64:75 interfaceid=889e6b2d-f4af-4747-aeb5-2e82d136873b target=hypervisor-01 type=virtual
< # okd-compute-04 ansible_host=10.0.0.9 mac=52:54:00:58:69:54 interfaceid=1a0dc524-6e85-4d2e-9498-aa86c2ac2c9f target=hypervisor-01 type=virtual
< # okd-compute-05 ansible_host=10.0.0.10 mac=52:54:00:95:95:18 interfaceid=cc90a978-9d3c-4fe7-8a7e-df072d9411b4 target=hypervisor-01 type=virtual
---
> okd-compute-02 ansible_host=10.0.0.7 mac=52:54:00:33:75:35 interfaceid=a9cc79f3-0892-47af-9195-6c28c718c2a0 target=hypervisor-02 type=virtual
> okd-compute-03 ansible_host=10.0.0.8 mac=52:54:00:51:64:75 interfaceid=889e6b2d-f4af-4747-aeb5-2e82d136873b target=hypervisor-03 type=virtual
> # okd-compute-04 ansible_host=10.0.0.9 mac=52:54:00:58:69:54 interfaceid=1a0dc524-6e85-4d2e-9498-aa86c2ac2c9f target=hypervisor-02 type=virtual
> # okd-compute-05 ansible_host=10.0.0.10 mac=52:54:00:95:95:18 interfaceid=cc90a978-9d3c-4fe7-8a7e-df072d9411b4 target=hypervisor-03 type=virtual
113c113
< okd-bootstrap-01 ansible_host=10.0.0.200 mac=52:54:00:30:69:71 interfaceid=c9e9b095-ab1c-4feb-8044-335d695e4f3d target=hypervisor-01 type=virtual
---
> okd-bootstrap-01 ansible_host=10.0.0.200 mac=52:54:00:30:69:71 interfaceid=c9e9b095-ab1c-4feb-8044-335d695e4f3d target=hypervisor-03 type=virtual

Additional context
Looking at the documentation of ansible.builtin.add_host, it appears that looping over the hosts in this fashion will not work by design.
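
To illustrate, a trimmed sketch of the shape of the task (variable names are taken from the task output above; this is not the Kubeinit task verbatim):

- name: Add ansible facts to hostvars
  ansible.builtin.add_host:
    name: "{{ kubeinit_deployment_node_name }}"
    ansible_default_ipv4_address: "{{ gather_results.ansible_facts.ansible_default_ipv4.address }}"
    ansible_hostname: "{{ gather_results.ansible_facts.ansible_hostname }}"
    distribution_family: "{{ distro_family }}"
  # Per the module notes, add_host bypasses the host loop and runs only once
  # for the batch (as if run_once were set), so hypervisor-02 and -03 never
  # execute it and never receive these hostvars.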

Hello,

What you say is correct, but we do gather the facts correctly; check: https://github.com/Kubeinit/kubeinit/blob/main/kubeinit/roles/kubeinit_prepare/tasks/gather_host_facts.yml#L51

What you see in the logs is expected: ansible_default_ipv4_address and distribution_family are only gathered on the first HV, but in the prepare step we make them consistent across all the cluster nodes.

@gmarcy, just to confirm, I think this is not a real bug.

That is exactly the point.

add_host runs only once, not once per host, as per the ansible.builtin.add_host notes.

It is at a later stage, however, that the fact ansible_default_ipv4_address is used via hostvars[kubeinit_deployment_node_name].ansible_default_ipv4_address, and it is not set for the subsequent HVs.
For an example, see: https://github.com/Kubeinit/kubeinit/blob/main/kubeinit/roles/kubeinit_libvirt/tasks/40_ovn_setup.yml#L24
The first run/loop is fine; it is the first HV, so the fact is set, but the subsequent HVs do not have this fact set.

The same goes for distribution_family;
see: https://github.com/Kubeinit/kubeinit/blob/main/kubeinit/roles/kubeinit_libvirt/tasks/20_ovn_install.yml#L90
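
For illustration, a minimal sketch of a downstream lookup shaped like the linked OVN tasks (the real tasks differ; only the hostvars access matters here):

- name: Use the per-hypervisor IP gathered earlier
  ansible.builtin.debug:
    msg: "{{ hostvars[kubeinit_deployment_node_name].ansible_default_ipv4_address }}"
  # Works for hypervisor-01, which got the hostvar from add_host; for
  # hypervisor-02/-03 the variable was never added, so the template fails
  # with an undefined-variable error.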

I'm testing moving the gather_host_facts call to a new task-gather-hypervisor-facts in the previous play. It appears to address the issue, but it will need more multi-HV cluster testing. Fortunately, that appears to be the only add_host call made when we are running on all hypervisors in parallel.
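
One possible shape for that direction (a sketch only, not necessarily what the new task-gather-hypervisor-facts or the eventual PR looks like; the group name kubeinit_hypervisors is assumed for illustration):

---
- name: Gather hypervisor facts up front
  hosts: kubeinit_hypervisors      # assumed group name, adjust to the inventory
  gather_facts: true
  tasks:
    - name: Publish per-hypervisor facts as hostvars
      ansible.builtin.add_host:
        name: "{{ item }}"
        ansible_default_ipv4_address: "{{ hostvars[item].ansible_default_ipv4.address }}"
        distribution_family: "{{ hostvars[item].ansible_distribution }}"  # Kubeinit derives distro_family with its own mapping; simplified here
      loop: "{{ ansible_play_hosts_all }}"
      # add_host still runs only once, but looping over the play hosts covers
      # every hypervisor, so the hostvars exist for the later plays.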

@rmlandvreugd this should be fixed in #409 now

@gmarcy: the PR works like a charm.

@ccamacho I'll close this one.

Thanks for the collab!