Kubeinit / kubeinit

Ansible automation to have a KUBErnetes cluster INITialized as soon as possible...

Home Page: https://www.kubeinit.org

Inventory file not used during Quay image deployment

Gl1TcH-1n-Th3-M4tR1x opened this issue

After deploying with the Quay image, changes to the inventory file were not taken into account:

Changed the HDD size for the master and compute nodes to 120GB, and the memory for the compute nodes to 25GB, with:

#
# Common variables for the inventory
#

[all:vars]

#
# Internal variables
#

ansible_python_interpreter=/usr/bin/python3
ansible_ssh_pipelining=True
ansible_ssh_common_args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=accept-new'

#
# Inventory variables
#

#
# The default for the cluster name is {{ kubeinit_cluster_distro + 'cluster' }}
# You can override this by setting a specific value in kubeinit_inventory_cluster_name

# kubeinit_inventory_cluster_name=mycluster

kubeinit_inventory_network_name=kimgtnet0

#
# Cluster definitions
#

# The networks you will use for your kubeinit clusters. The network name will be used
# to create a libvirt network for the cluster guest vms. The network cidr will set
# the range of addresses reserved for the cluster nodes. The gateway offset will be
# used to select the gateway address within the range, a negative offset starts at the
# end of the range, so for network=10.0.0.0/24, gateway_offset=-2 will select 10.0.0.254
# and gateway_offset=1 will select 10.0.0.1 as the address. Other offset attributes
# follow the same convention.

[kubeinit_networks]
# kimgtnet0 network=10.0.0.0/24 gateway_offset=-2 nameserver_offset=-3 dhcp_start_offset=1 dhcp_end_offset=-4
# kimgtnet1 network=10.0.1.0/24 gateway_offset=-2 nameserver_offset=-3 dhcp_start_offset=1 dhcp_end_offset=-4

# The clusters you are deploying using kubeinit. If there are no clusters defined here
# then kubeinit will assume you are only using one cluster at a time and will use the
# network defined by kubeinit_inventory_network.

[kubeinit_clusters]
# cluster0 network_name=kimgtnet0
# cluster1 network_name=kimgtnet1
#
# If variables are defined in this section, they will have precedence when setting
# kubeinit_inventory_post_deployment_services and kubeinit_inventory_network_name
#
# clusterXXX network_name=kimgtnetXXX post_deployment_services="none"
# clusterYYY network_name=kimgtnetYYY post_deployment_services="none"

#
# Hosts definitions
#

# The cluster's guest machines can be distributed across multiple hosts. By default they
# will be deployed in the first Hypervisor. These hypervisors are activated and used
# depending on how they are referenced in the kubeinit spec string.
#
# When we are running the setup-playbook, if a hypervisor host has an ssh_hostname attribute
# then a .ssh/config file will be created and an entry mapping the ansible_host to that
# ssh hostname will be created. In the first commented out example we would associate
# the ansible_host of the first hypervisor host "nyctea" with the hostname provided, it
# can be a short or fully qualified name, but it needs to be resolvable on the host we
# are running the kubeinit setup from. The second example uses a host ip address, which
# can be useful in those cases where the host you are using doesn't have a dns name.

[hypervisor_hosts]
hypervisor-01 ansible_host=nyctea
hypervisor-02 ansible_host=tyto
# hypervisor-01 ansible_host=nyctea ssh_hostname=server1.example.com
# hypervisor-02 ansible_host=tyto ssh_hostname=192.168.222.202
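# For illustration only (added here, not part of the original file): with the
# first commented example above, the setup-playbook would generate a .ssh/config
# entry roughly along the lines of
#   Host nyctea
#       HostName server1.example.com
# so that the ansible_host "nyctea" is reached through the given ssh hostname.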

# The inventory will have one host identified as the bastion host. By default, this role will
# be assumed by the first hypervisor, which is the same behavior as the first commented out
# line. The second commented out line would set the second hypervisor to be the bastion host.
# The final commented out line would set the bastion host to be a different host that is not
# being used as a hypervisor for the guest VMs for the clusters using this inventory.

[bastion_host]
# bastion target=hypervisor-01
# bastion target=hypervisor-02
# bastion ansible_host=bastion

# The inventory will have one host identified as the ovn-central host. By default, this role
# will be assumed by the first hypervisor, which is the same behavior as the first commented
# out line. The second commented out line would set the second hypervisor to be the ovn-central
# host.

[ovn_central_host]
# ovn-central target=hypervisor-01
# ovn-central target=hypervisor-02

#
# Setup host definition (used only with the setup-playbook.yml)
#

# This inventory will have one host identified as the setup host. By default, this will be
# localhost, which is the same behavior as the first commented out line. The second commented
# out line would set the first hypervisor host to be the setup host. The final commented out
# line would set the setup host to be a different host that is not being used as a hypervisor
# in this inventory.

[setup_host]
# kubeinit-setup ansible_host=localhost
# kubeinit-setup ansible_host=nyctea
# kubeinit-setup ansible_host=192.168.222.214

[kubeinit_cluster:vars]
cluster_domain=kubeinit.local
hypervisor_name_pattern=hypervisor-%02d
controller_name_pattern=controller-%02d
compute_name_pattern=compute-%02d
post_deployment_services=none

[kubeinit_cluster]

[kubeinit_network:vars]
network=10.0.0.0/24
gateway_offset=-2
nameserver_offset=-3
dhcp_start_offset=1
dhcp_end_offset=-4

[kubeinit_network]

#
# Cluster node definitions
#

# Controller, compute, and extra nodes can be configured as virtual machines or using the
# manually provisioned baremetal machines for the deployment.

# Only use an odd number configuration, this means enabling only 1, 3, or 5 controller nodes
# at a time.

[controller_nodes:vars]
os={'cdk': 'ubuntu', 'eks': 'centos', 'k8s': 'centos', 'kid': 'debian', 'okd': 'coreos', 'rke': 'ubuntu'}
disk=120G
ram=25165824
vcpus=8
maxvcpus=16
type=virtual
target_order=hypervisor-01

[controller_nodes]

[compute_nodes:vars]
os={'cdk': 'ubuntu', 'eks': 'centos', 'k8s': 'centos', 'kid': 'debian', 'okd': 'coreos', 'rke': 'ubuntu'}
disk=120G
ram=25165824
vcpus=8
maxvcpus=16
type=virtual
target_order="hypervisor-02,hypervisor-01"

[compute_nodes]

[extra_nodes:vars]
os={'cdk': 'ubuntu', 'okd': 'coreos'}
disk=20G
ram={'cdk': '8388608', 'okd': '16777216'}
vcpus=8
maxvcpus=16
type=virtual
target_order="hypervisor-02,hypervisor-01"

[extra_nodes]
juju-controller when_distro=cdk
bootstrap when_distro=okd

# Service nodes are a set of service containers sharing the same pod network.
# There is an implicit 'provision' service container which will use a base os
# container image based upon the service_nodes:vars os attribute.

[service_nodes:vars]
os={'cdk': 'ubuntu', 'eks': 'centos', 'k8s': 'centos', 'kid': 'debian', 'okd': 'centos', 'rke': 'ubuntu'}
target_order=hypervisor-01

[service_nodes]
service services="bind,dnsmasq,haproxy,apache,registry" # nexus
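
As a side note on the offset convention documented in the inventory comments above, the small snippet below shows how the offsets map to concrete addresses. It is only an illustration of the convention (it assumes python3 with the standard ipaddress module on the host; it is not how kubeinit itself computes the addresses):

python3 -c '
import ipaddress
net = ipaddress.ip_network("10.0.0.0/24")
print("gateway    (offset -2):", net[-2])   # 10.0.0.254
print("nameserver (offset -3):", net[-3])   # 10.0.0.253
print("dhcp start (offset  1):", net[1])    # 10.0.0.1
print("dhcp end   (offset -4):", net[-4])   # 10.0.0.252
'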

Here is the command line used to deploy the cluster:

podman run --rm -it \
    -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:z \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub:z \
    -v ~/.ssh/config:/root/.ssh/config:z \
    quay.io/kubeinit/kubeinit:$TAG \
    -vvv --user root \
    -e kubeinit_spec=okd-libvirt-1-3-1 \
    -i ./kubeinit/inventory \
    ./kubeinit/playbook.yml
Trying to pull quay.io/kubeinit/kubeinit:2.0.1...
Getting image source signatures
Copying blob 16b78ed2e822 done  
Copying blob 131f1a26eef0 done  
Copying blob a6cf203719b0 done  
Copying blob 605b7063f5cc done  
Copying blob 5c75918c2841 done  
Copying blob 2a9efe8e1c66 done  
Copying config be8851a661 done  
Writing manifest to image destination
Storing signatures
ansible-playbook [core 2.12.1]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible-playbook
  python version = 3.9.6 (default, Aug 11 2021, 06:39:25) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
  jinja version = 3.0.3
  libyaml = True
No config file found; using defaults
host_list declined parsing /kubeinit/kubeinit/inventory as it did not pass its verify_file() method
script declined parsing /kubeinit/kubeinit/inventory as it did not pass its verify_file() method
auto declined parsing /kubeinit/kubeinit/inventory as it did not pass its verify_file() method
Parsed /kubeinit/kubeinit/inventory inventory source with ini plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: playbook.yml ************************************************************************************************************
3 plays in ./kubeinit/playbook.yml

PLAY [Initial setup] **************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************
task path: /kubeinit/kubeinit/playbook.yml:17
Using module file /usr/lib/python3.9/site-packages/ansible/modules/setup.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 && sleep 0'
ok: [localhost]

After the deployment completed, the controller node had a 25GB HDD and the worker nodes had 8GB of RAM and a 30GB HDD, rather than the sizes set in the inventory.

Fixed the problem by modifying the inventory file to:

#
# Common variables for the inventory
#

[all:vars]

#
# Internal variables
#

ansible_python_interpreter=/usr/bin/python3
ansible_ssh_pipelining=True
ansible_ssh_common_args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=accept-new'

#
# Inventory variables
#

#
# The default for the cluster name is {{ kubeinit_cluster_distro + 'cluster' }}
# You can override this by setting a specific value in kubeinit_inventory_cluster_name

# kubeinit_inventory_cluster_name=mycluster
kubeinit_inventory_cluster_domain=kubeinit.local

kubeinit_inventory_network_name=kimgtnet0

kubeinit_inventory_network=10.0.0.0/24
kubeinit_inventory_gateway_offset=-2
kubeinit_inventory_nameserver_offset=-3
kubeinit_inventory_dhcp_start_offset=1
kubeinit_inventory_dhcp_end_offset=-4

kubeinit_inventory_controller_name_pattern=controller-%02d
kubeinit_inventory_compute_name_pattern=compute-%02d

kubeinit_inventory_post_deployment_services="none"

#
# Cluster definitions
#

# The networks you will use for your kubeinit clusters.  The network name will be used
# to create a libvirt network for the cluster guest vms.  The network cidr will set
# the range of addresses reserved for the cluster nodes.  The gateway offset will be
# used to select the gateway address within the range, a negative offset starts at the
# end of the range, so for network=10.0.0.0/24, gateway_offset=-2 will select 10.0.0.254
# and gateway_offset=1 will select 10.0.0.1 as the address.  Other offset attributes
# follow the same convention.

[kubeinit_networks]
# kimgtnet0 network=10.0.0.0/24 gateway_offset=-2 nameserver_offset=-3 dhcp_start_offset=1 dhcp_end_offset=-4
# kimgtnet1 network=10.0.1.0/24 gateway_offset=-2 nameserver_offset=-3 dhcp_start_offset=1 dhcp_end_offset=-4

# The clusters you are deploying using kubeinit.  If there are no clusters defined here
# then kubeinit will assume you are only using one cluster at a time and will use the
# network defined by kubeinit_inventory_network.

[kubeinit_clusters]
# cluster0 network_name=kimgtnet0
# cluster1 network_name=kimgtnet1
#
# If variables are defined in this section, they will have precedence when setting
# kubeinit_inventory_post_deployment_services and kubeinit_inventory_network_name
#
# clusterXXX network_name=kimgtnetXXX post_deployment_services="none"
# clusterYYY network_name=kimgtnetYYY post_deployment_services="none"

#
# Hosts definitions
#

# The cluster's guest machines can be distributed across multiple hosts. By default they
# will be deployed in the first Hypervisor. These hypervisors are activated and used
# depending on how they are referenced in the kubeinit spec string.

[hypervisor_hosts]
hypervisor-01 ansible_host=nyctea
hypervisor-02 ansible_host=tyto

# The inventory will have one host identified as the bastion host. By default, this role will
# be assumed by the first hypervisor, which is the same behavior as the first commented out
# line. The second commented out line would set the second hypervisor to be the bastion host.
# The final commented out line would set the bastion host to be a different host that is not
# being used as a hypervisor for the guest VMs for the clusters using this inventory.

[bastion_host]
# bastion target=hypervisor-01
# bastion target=hypervisor-02
# bastion ansible_host=bastion

# The inventory will have one host identified as the ovn-central host.  By default, this role
# will be assumed by the first hypervisor, which is the same behavior as the first commented
# out line.  The second commented out line would set the second hypervisor to be the ovn-central
# host.

[ovn_central_host]
# ovn-central target=hypervisor-01
# ovn-central target=hypervisor-02

#
# Cluster node definitions
#

# Controller, compute, and extra nodes can be configured as virtual machines or using the
# manually provisioned baremetal machines for the deployment.

# Only use an odd number configuration, this means enabling only 1, 3, or 5 controller nodes
# at a time.

[controller_nodes:vars]
os={'cdk': 'ubuntu', 'eks': 'centos', 'k8s': 'centos', 'kid': 'debian', 'okd': 'coreos', 'rke': 'ubuntu'}
disk=120G
ram=25165824
vcpus=8
maxvcpus=16
type=virtual
target_order=hypervisor-01

[controller_nodes]

[compute_nodes:vars]
os={'cdk': 'ubuntu', 'eks': 'centos', 'k8s': 'centos', 'kid': 'debian', 'okd': 'coreos', 'rke': 'ubuntu'}
disk=120G
ram=25165824
vcpus=8
maxvcpus=16
type=virtual
target_order="hypervisor-02,hypervisor-01"

[compute_nodes]

[extra_nodes:vars]
os={'cdk': 'ubuntu', 'okd': 'coreos'}
disk=20G
ram={'cdk': '8388608', 'okd': '16777216'}
vcpus=8
maxvcpus=16
type=virtual
target_order="hypervisor-02,hypervisor-01"

[extra_nodes]
juju-controller distro=cdk
bootstrap distro=okd

# Service nodes are a set of service containers sharing the same pod network.
# There is an implicit 'provision' service container which will use a base os
# container image based upon the service_nodes:vars os attribute.

[service_nodes:vars]
os={'cdk': 'ubuntu', 'eks': 'centos', 'k8s': 'centos', 'kid': 'debian', 'okd': 'centos', 'rke': 'ubuntu'}
target_order=hypervisor-01

[service_nodes]
service services="bind,dnsmasq,haproxy,apache,registry" # nexus

And the deployment command to:
podman run --rm -it \
    -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:z \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub:z \
    -v ~/.ssh/config:/root/.ssh/config:z \
    -v ./kubeinit/inventory:/kubeinit/kubeinit/inventory \
    quay.io/kubeinit/kubeinit:$TAG \
    -vvv --user root \
    -e kubeinit_spec=okd-libvirt-1-3-1 \
    -i ./kubeinit/inventory \
    ./kubeinit/playbook.yml
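
The key difference is the extra volume mount: inside the container the relative path ./kubeinit/inventory resolves to /kubeinit/kubeinit/inventory, which is the copy baked into the image (the same path Ansible reports parsing in the log above), so without bind-mounting the local file over it the edited inventory on the host is never read. A quick way to confirm which inventory the container actually sees is the check below (an illustration only; it assumes the image ships /bin/sh and grep, which may not hold for every tag):

# Without the bind mount this prints the defaults baked into the image:
podman run --rm --entrypoint /bin/sh quay.io/kubeinit/kubeinit:$TAG \
    -c 'grep -E "^(disk|ram)=" /kubeinit/kubeinit/inventory'

# With the bind mount the same command prints the values edited on the host:
podman run --rm --entrypoint /bin/sh \
    -v ./kubeinit/inventory:/kubeinit/kubeinit/inventory \
    quay.io/kubeinit/kubeinit:$TAG \
    -c 'grep -E "^(disk|ram)=" /kubeinit/kubeinit/inventory'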