There are two ex280 classes on Linux Academy, an OLD one and a NEW one.
The NEW one has "ex280" in lowercase.
The OpenShift cluster for this training uses the following configuration:
Host machine
- MacBook Pro
  - 4-core
  - 16gb RAM
  - 256gb SSD
- VirtualBox
  - Host-Only Network as follows:
    - vboxnet0
    - IPv4 192.168.10.1
    - Mask 255.255.255.0 (equivalent to 192.168.10.1/24)
    - IPv6 not set
    - DHCP not enabled
Router VM to provide DNS, routing services, and to run the Ansible playbooks
- Hostname router.example.com
- CentOS 7.6.1810 64-bit Minimal Install
- 2 virtual CPUs
- 1gb RAM
- 8gb Disk
- NIC 1
  - VirtualBox Host-Only Network 192.168.10.0/24 (vboxnet0)
  - IP 192.168.10.2
  - No Gateway
  - No DNS
- NIC 2
  - VirtualBox NAT Network
  - DHCP
OpenShift VMs:
- Master Node 1
  - Hostname master1.example.com
  - 192.168.10.11 (vboxnet0)
- Master Node 2
  - Hostname master2.example.com
  - 192.168.10.12 (vboxnet0)
- Master Node 3
  - Hostname master3.example.com
  - 192.168.10.13 (vboxnet0)
- Infrastructure Worker Node 1
  - Hostname infra1.example.com
  - 192.168.10.21 (vboxnet0)
- Infrastructure Worker Node 2
  - Hostname infra2.example.com
  - 192.168.10.22 (vboxnet0)
- Compute Worker Node 1
  - Hostname compute1.example.com
  - 192.168.10.31 (vboxnet0)
- Compute Worker Node 2
  - Hostname compute2.example.com
  - 192.168.10.32 (vboxnet0)
- Load Balancer
  - Hostname lb.example.com
  - 192.168.10.41 (vboxnet0)
All OpenShift VMs use the following configuration:
- CentOS 7.6.1810 64-bit Minimal Install
- 2 virtual CPUs
- 2gb RAM
- Network
  - Host-Only Network 192.168.10.0/24 (vboxnet0)
  - Gateway 192.168.10.2
  - DNS 192.168.10.2
  - Search Domains: example.com
- Disk 1
  - 13gb
  - /dev/sda
  - 10gb / (root)
  - 3gb swap
- Disk 2
  - 20gb
  - /dev/sdb
  - unallocated; will be provisioned later as the vg_docker thin pool
Web Access
- OKD Master Console

Update /etc/hosts on the Mac host to add the hosts:

```
sudo vim /etc/hosts
```

Add the following /etc/hosts entries:

```
192.168.10.20 desktop.example.com desktop
192.168.10.2 router.example.com router
192.168.10.11 master1.example.com master1
192.168.10.12 master2.example.com master2
192.168.10.13 master3.example.com master3
192.168.10.21 infra1.example.com infra1
192.168.10.22 infra2.example.com infra2
192.168.10.31 compute1.example.com compute1
192.168.10.32 compute2.example.com compute2
192.168.10.41 lb.example.com lb
```

Clear the local macOS DNS cache:

```
sudo dscacheutil -flushcache
```
Set up the Router VM first so you have networking for the others.

- Install CentOS
- Become the root user
- Change /etc/sudoers to use NOPASSWD for the wheel group

```
visudo
```

Change the file to look like this:

```
## Allows people in group wheel to run all commands
# %wheel ALL=(ALL) ALL

## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
```
- Create your local non-root user

```
useradd -G wheel <userid>
passwd <userid>
```
- Update the kernel and reboot

```
yum -y update && reboot
```

- Set up all ancillary packages

```
# Install the standard server packages
yum -y group install core base

# Install support packages and the EPEL repo
yum -y install git iptables-services epel-release pyOpenSSL

# Install VirtualBox guest tools
yum -y install dkms kernel-devel
# (using the VirtualBox menu: Devices > Insert Guest Additions CD Image)
mount /dev/cdrom /mnt
/mnt/VBoxLinuxAdditions.run
umount /mnt && eject cdrom

# Disable the EPEL repo
yum-config-manager --disable epel
```
- Update /etc/hosts on all VMs

```
cat <<EOF2 >> /etc/hosts
192.168.10.2 router.example.com router
192.168.10.11 master1.example.com master1
192.168.10.12 master2.example.com master2
192.168.10.13 master3.example.com master3
192.168.10.21 infra1.example.com infra1
192.168.10.22 infra2.example.com infra2
192.168.10.31 compute1.example.com compute1
192.168.10.32 compute2.example.com compute2
192.168.10.41 lb.example.com lb
EOF2
```
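Since every entry in the heredoc above follows the same `IP FQDN shortname` pattern, the block can also be generated from a single name/IP table (a sketch; the hostnames and IPs are taken from the cluster layout above):

```shell
#!/usr/bin/env bash
# Generate the lab /etc/hosts entries from one name->IP list.
hosts_block=$(
  while read -r ip name; do
    printf '%s %s.example.com %s\n' "$ip" "$name" "$name"
  done <<'EOF'
192.168.10.2 router
192.168.10.11 master1
192.168.10.12 master2
192.168.10.13 master3
192.168.10.21 infra1
192.168.10.22 infra2
192.168.10.31 compute1
192.168.10.32 compute2
192.168.10.41 lb
EOF
)
echo "$hosts_block"
# Append the output to /etc/hosts on each VM, e.g.:
#   echo "$hosts_block" | sudo tee -a /etc/hosts
```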
- Configure the kernel to allow forwarding as a router (`sysctl -w` echoes the assignment it applies, which is itself valid sysctl.conf syntax, so redirecting the output also makes the setting persistent)

```
sysctl -w net.ipv4.ip_forward=1 > /etc/sysctl.d/ip_forward.conf
```
- Configure firewalld as a router
  - Assumptions:
    - enp0s8 is the public interface connected to the WAN or external network
    - 192.168.10.0/24 is the private VM network

```
firewall-cmd --permanent --direct --passthrough ipv4 -t nat \
  -I POSTROUTING -o enp0s8 -j MASQUERADE -s 192.168.10.0/24
firewall-cmd --change-interface=enp0s8 --zone=external --permanent
firewall-cmd --set-default-zone=internal
firewall-cmd --complete-reload
systemctl restart network && systemctl restart firewalld
```
- Set up dnsmasq as the DNS server

```
cp /etc/resolv.conf /etc/resolv.conf.orig
cp /etc/resolv.conf /etc/resolv.dnsmasq
echo -e 'search example.com\nnameserver 127.0.0.1' > /etc/resolv.conf

cat <<EOF > /etc/dnsmasq.d/dnsmasq_lab.conf
resolv-file=/etc/resolv.dnsmasq
address=/router.example.com/192.168.10.2
address=/master1.example.com/192.168.10.11
address=/master2.example.com/192.168.10.12
address=/master3.example.com/192.168.10.13
address=/infra1.example.com/192.168.10.21
address=/infra2.example.com/192.168.10.22
address=/compute1.example.com/192.168.10.31
address=/compute2.example.com/192.168.10.32
address=/lb.example.com/192.168.10.41
address=/apps.example.com/192.168.10.41
address=/openshift.example.com/192.168.10.41
address=/openshift-internal.example.com/192.168.10.41
EOF

systemctl enable --now dnsmasq.service
firewall-cmd --add-service=dns --perm --zone=internal
firewall-cmd --reload
```
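The `address=` records in the dnsmasq config above all follow one pattern, so they can be generated from a host/IP table rather than typed by hand (a sketch using the names and addresses from the cluster layout; adjust to taste):

```shell
#!/usr/bin/env bash
# Emit the dnsmasq address= records for every lab name.
declare -A ip=(
  [router]=192.168.10.2     [master1]=192.168.10.11  [master2]=192.168.10.12
  [master3]=192.168.10.13   [infra1]=192.168.10.21   [infra2]=192.168.10.22
  [compute1]=192.168.10.31  [compute2]=192.168.10.32 [lb]=192.168.10.41
  [apps]=192.168.10.41 [openshift]=192.168.10.41 [openshift-internal]=192.168.10.41
)
records=$(
  for h in "${!ip[@]}"; do
    printf 'address=/%s.example.com/%s\n' "$h" "${ip[$h]}"
  done | sort
)
echo "$records"
# Redirect the output into /etc/dnsmasq.d/dnsmasq_lab.conf along with the
# resolv-file= line shown above.
```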
- Stop NetworkManager from replacing resolv.conf

```
sed -i '/^\[main\]/a dns=none' /etc/NetworkManager/NetworkManager.conf
systemctl restart NetworkManager.service
```
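The sed one-liner appends a `dns=none` line immediately after the `[main]` section header. A self-contained demo of the same edit on a scratch copy (the temp file and its contents are just for illustration, not the real NetworkManager config):

```shell
#!/usr/bin/env bash
# Demonstrate the sed edit on a scratch copy of NetworkManager.conf.
tmp=$(mktemp)
printf '[main]\nplugins=ifcfg-rh\n' > "$tmp"
sed -i '/^\[main\]/a dns=none' "$tmp"   # append dns=none after [main]
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```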
- Test dnsmasq

```
host `hostname`
host www.google.com
```
- Install Ansible 2.6

```
# Install the ansible 2.6 repo
yum -y install centos-release-ansible26

# Install ansible 2.6.14
yum -y install ansible-2.6.14
```
- Install the OpenShift 3.11 ansible repo

```
yum -y install centos-release-openshift-origin311
```

- Install the OpenShift 3.11 oc CLI

```
yum -y install origin-clients
```

- Install the OpenShift ansible playbooks

```
yum -y install openshift-ansible
```

- Install the openshift-ansible 3.11 playbooks from GitHub, as your NON-ROOT user

```
mkdir ~/github
git clone https://github.com/openshift/openshift-ansible.git ~/github/openshift-ansible
cd ~/github/openshift-ansible
git checkout origin/release-3.11
```
- Install this repo, as your NON-ROOT user

```
git clone git@github.com:robbrucks/linux-academy-ex280.git ~/github/linux-academy-ex280
```

You can use either the GitHub version of the openshift-ansible playbooks you cloned in your home directory, or the RPM version installed in /usr/share/ansible/openshift-ansible on the router VM.
- Disable firewalld (on the OpenShift VMs)

```
systemctl disable --now firewalld.service
```

- Install Docker

```
yum -y install docker-1.13.1
```

- Verify /dev/sdb exists and has no partitions

```
lsblk
```
- Configure whichever Docker storage backend you prefer

Using overlay2:

```
cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS='/dev/sdb'
VG=vg_docker
DATA_SIZE=95%VG
STORAGE_DRIVER=overlay2
CONTAINER_ROOT_LV_NAME=lv_docker
CONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/docker
CONTAINER_ROOT_LV_SIZE=100%FREE
EOF
docker-storage-setup
```

Using a thin pool:

```
cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS='/dev/sdb'
DATA_SIZE=99%VG
VG=vg_docker
CONTAINER_THINPOOL=lv_docker
EOF
docker-storage-setup
```
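docker-storage-setup reads its config file as plain shell `key=value` assignments, so a scratch copy can be sanity-checked by sourcing it before writing the real /etc/sysconfig/docker-storage-setup. A sketch using the overlay2 values above (the temp file is only for the demo):

```shell
#!/usr/bin/env bash
# Sanity-check the docker-storage-setup config by sourcing it the same
# way docker-storage-setup does (plain shell key=value syntax).
cfg=$(mktemp)
cat <<'EOF' > "$cfg"
DEVS='/dev/sdb'
VG=vg_docker
DATA_SIZE=95%VG
STORAGE_DRIVER=overlay2
CONTAINER_ROOT_LV_NAME=lv_docker
CONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/docker
CONTAINER_ROOT_LV_SIZE=100%FREE
EOF
. "$cfg"
echo "driver=$STORAGE_DRIVER vg=$VG devs=$DEVS"
rm -f "$cfg"
```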
- Verify Docker storage is set (ONLY ON THE OPENSHIFT VMS)

```
vgs
lvs
```

- Enable and start Docker (ONLY ON THE OPENSHIFT VMS)

```
systemctl enable --now docker
```
- Log in to the ROUTER VM as your local NON-ROOT USER
- Set up shared SSH keys

```
ssh master.example.com "mkdir ~/.ssh;chmod 700 ~/.ssh"
ssh infra.example.com "mkdir ~/.ssh;chmod 700 ~/.ssh"
ssh compute.example.com "mkdir ~/.ssh;chmod 700 ~/.ssh"
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ''
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
echo 'StrictHostKeyChecking=no' > ~/.ssh/config
chmod 600 ~/.ssh/config
scp ~/.ssh/* master.example.com:~/.ssh
scp ~/.ssh/* infra.example.com:~/.ssh
scp ~/.ssh/* compute.example.com:~/.ssh
```
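Note that the hostnames in the snippet above (master, infra, compute) do not match the node names defined earlier in this cluster (master1..master3, infra1/infra2, compute1/compute2, lb). Assuming the key setup should reach every node in the layout, the per-node commands can be generated with a loop and reviewed before running (a dry-run sketch; pipe the output to `sh` once it looks right):

```shell
#!/usr/bin/env bash
# Dry run: print the per-node commands for seeding ~/.ssh on every node.
# Node names assumed from the cluster layout above.
nodes="master1 master2 master3 infra1 infra2 compute1 compute2 lb"
cmds=$(
  for h in $nodes; do
    echo "ssh $h.example.com 'mkdir -p ~/.ssh; chmod 700 ~/.ssh'"
    echo "scp ~/.ssh/* $h.example.com:~/.ssh"
  done
)
echo "$cmds"
```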
- Copy the inventory file from this repo to the non-root user's home directory
- Edit the inventory file and replace ___YOUR_SSO___ with your non-root userid
- Check the openshift facts to ensure correctness

```
ansible-playbook -i ~/inventory /usr/share/ansible/openshift-ansible/playbooks/byo/openshift_facts.yml
```
- Pre-Check the Cluster

```
ansible-playbook -i ~/inventory /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
```

- Create the Cluster

```
ansible-playbook -i ~/inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
```

- Test the Cluster from the Master Node

```
oc get nodes
oc get pods
oc status
oc describe all
```
- Modify the saved iptables rules to allow NFS, since the running iptables is controlled by OKD

```
sed -i '/^COMMIT/i -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT' /etc/sysconfig/iptables
sed -i '/^COMMIT/i -A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT' /etc/sysconfig/iptables
sed -i '/^COMMIT/i -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2049 -j ACCEPT' /etc/sysconfig/iptables
sed -i '/^COMMIT/i -A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 2049 -j ACCEPT' /etc/sysconfig/iptables
```
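The four sed commands above differ only in protocol and port (portmapper 111 and nfsd 2049, tcp and udp), so the rules can also be built with a nested loop (a sketch; the insertion into /etc/sysconfig/iptables still works the same way):

```shell
#!/usr/bin/env bash
# Build the four OS_FIREWALL_ALLOW rules for NFS (ports 111 and 2049).
rules=$(
  for port in 111 2049; do
    for proto in tcp udp; do
      printf -- '-A OS_FIREWALL_ALLOW -p %s -m state --state NEW -m %s --dport %s -j ACCEPT\n' \
        "$proto" "$proto" "$port"
    done
  done
)
echo "$rules"
# Insert each emitted rule before COMMIT in /etc/sysconfig/iptables, e.g.:
#   echo "$rules" | while read -r r; do
#     sed -i "/^COMMIT/i $r" /etc/sysconfig/iptables
#   done
```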
- Install NFS

```
yum -y install nfs-utils
mkdir -p /home/data/persistent{1,2,3,4,5,etcd}
chown -R nfsnobody:nfsnobody /home/data
chmod 700 /home/data/persistent*
cat <<EOF >/etc/exports.d/dbvol.exports
/home/data/persistent1 *(rw,async,all_squash)
/home/data/persistent2 *(rw,async,all_squash)
/home/data/persistent3 *(rw,async,all_squash)
/home/data/persistent4 *(rw,async,all_squash)
/home/data/persistent5 *(rw,async,all_squash)
/home/data/persistentetcd *(rw,async,all_squash)
EOF
setsebool -P virt_use_nfs 1
systemctl enable nfs
```
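The export entries in the heredoc above share one pattern, so they can be generated from the same suffix list used by the `mkdir` brace expansion (a sketch; same exports as above):

```shell
#!/usr/bin/env bash
# Emit the /etc/exports.d entries for the persistent volumes.
exports=$(
  for v in 1 2 3 4 5 etcd; do
    printf '/home/data/persistent%s *(rw,async,all_squash)\n' "$v"
  done
)
echo "$exports"
# Redirect the output into /etc/exports.d/dbvol.exports.
```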
- Reboot to update iptables and start NFS

```
reboot
```

- Validate that everything is working

```
exportfs -a
showmount -e
```
- Un-comment the metrics settings in the inventory
- Run the ansible playbook to install the metrics

```
ansible-playbook -i ./inventory \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
```

It will take quite a while for the metrics system to build the Cassandra DB and start up. Mine took 15 minutes. Be patient.
Kube Ops View for OpenShift: https://github.com/raffaelespazzoli/kube-ops-view/tree/ocp

```
oc new-project funkybox
oc create sa kube-ops-view
oc adm policy add-scc-to-user anyuid -z kube-ops-view
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:funkybox:kube-ops-view
oc apply -f https://raw.githubusercontent.com/raffaelespazzoli/kube-ops-view/ocp/deploy-openshift/kube-ops-view.yaml
oc expose svc kube-ops-view --name=funkybox --hostname funkybox.example.com
oc get route | grep kube-ops-view | awk '{print $2}'
```