garyttt / freeipa_puppet_foreman


Centralized UNIX Auth / Centralized sudoers / Centralized HostBasedAccessControl / Centralized Configuration

This Git repo provides Ansible and shell scripts for building:

  1. FreeIPA (RedHat Identity Management) Primary Master (ipa.example.local) CentOS Stream 8.X 2-CPU/4GB RAM
  2. FreeIPA (RedHat Identity Management) Secondary Master (ipa2.example.local) CentOS Stream 8.X 2-CPU/4GB RAM
  3. Puppet Enterprise 2021.4 (puppet.example.local) CentOS Stream 8.X 2-CPU/4GB RAM
  4. Foreman 3.0.1 (foreman.example.local) CentOS Stream 8.X 2-CPU/4GB RAM

Please run 'dnf upgrade' on the above servers to apply the latest OS patches before proceeding.

What is FreeIPA?

Note: we will be using the RedHat Identity Management (IdM) AppStream Yum Repo '@idm:DL1'; the FreeIPA package names are prefixed with 'ipa-', not 'freeipa-'.
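The module stream can be enabled with dnf before the playbooks run; a hedged sketch assuming the RHEL 8 / CentOS Stream 8 package set (run as root on ipa and ipa2; the exact package list may differ for your release):

```shell
# Enable the IdM module stream, then install the server packages.
# Note the 'ipa-' prefix per the note above, not 'freeipa-'.
dnf module enable -y idm:DL1
dnf install -y ipa-server ipa-server-dns
```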

Preparations

  1. Ensure all VMs have the same host entries in /etc/hosts; edit the entries to match your VMs' actual IPs.
192.168.159.128 puppet.example.local puppet
192.168.159.129 foreman.example.local foreman
192.168.159.131 centos8.example.local centos8
192.168.159.132 ubuntu20.example.local ubuntu20
192.168.159.133 ipa.example.local ipa
192.168.159.134 jenkins.example.local jenkins
192.168.159.135 ipa2.example.local ipa2
  2. Ensure the DNS client (resolver) is configured to search the 'example.local' domain on ALL VMs.

Login as root at ALL VMs:

sed -i 's/.*Domains=.*/Domains=example.local/' /etc/systemd/resolved.conf
systemctl restart systemd-resolved
hostname -i
hostname -f
  • hostname -i must not return multiple entries, or the Ansible playbook run will fail.
  • hostname -f should return the FQDN (Fully Qualified Domain Name).
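The two checks above can be wrapped in a small helper for scripted preflights; a hedged sketch ('check_resolution' is a hypothetical name, not part of the repo):

```shell
# Hypothetical helper mirroring the two checks above; pass it the outputs
# of `hostname -i` and `hostname -f`.
check_resolution() {
  ip_count=$(echo "$1" | wc -w | tr -d ' ')
  if [ "$ip_count" -ne 1 ]; then
    echo "FAIL: hostname -i returned $ip_count entries"
    return 1
  fi
  case "$2" in
    *.*) echo "OK: $2 looks fully qualified" ;;
    *)   echo "FAIL: $2 is not an FQDN"; return 1 ;;
  esac
}

check_resolution "192.168.159.133" "ipa.example.local"
```

On each VM you would run it as `check_resolution "$(hostname -i)" "$(hostname -f)"`.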
  3. Ensure all VMs have the same timezone.
ln -sf /usr/share/zoneinfo/Asia/Singapore /etc/localtime
  4. Ensure umask is 0022 in ~/.bashrc of the root account on ipa and ipa2, which will usually own the package files.
  5. Ensure the run_user@controller (gtay@centos8) SSH public key is authorized by remote_user@remote_host (gtay@ALL_VMs).
  6. Ensure the remote_user (gtay) has sudo rights on the remote_host (ALL VMs).
  7. Ensure curl and a Git client are installed on ALL VMs.
  8. Ensure the ntpd service is inactive and the chronyd service is active on ALL VMs.
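The tooling prerequisites above can be spot-checked with a small helper; a hedged sketch ('require_cmds' is a hypothetical name; the systemctl checks are commented out because they need root on a systemd host):

```shell
# Hypothetical preflight helper: report any missing commands.
require_cmds() {
  rc=0
  for c in "$@"; do
    command -v "$c" >/dev/null 2>&1 || { echo "missing: $c"; rc=1; }
  done
  return $rc
}

require_cmds curl git || echo "install the missing tools before proceeding"
# systemctl is-active --quiet chronyd || echo "chronyd is not active"
# systemctl is-active --quiet ntpd   && echo "ntpd must be stopped and disabled"
```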

Centralized UNIX Authentication: FreeIPA Primary and Secondary Master

Note: the Secondary Master is a Replica Master with an additional CA server installed. At any one time, either the Primary Master or the Secondary Master can play the role of CA Renewal Master via the 'ipa-crlgen-manage enable' command.

  1. Login as run_user (gtay) at the controller (centos8) and clone the GIT Repo.
git clone https://github.com/garyttt/freeipa_puppet_foreman.git
cd freeipa_puppet_foreman/ansible
  2. Edit 'DNS_SERVER1' and the other customized settings, providing the actual IPs for your use case.
grep -iR 192.168 *
install_freeipa_replica.yaml:    DNS_SERVER1: "192.168.159.2"
install_freeipa_server.yaml:    DNS_SERVER1: "192.168.159.2"
install_freeipa_server.yaml:      prompt: The FreeIPA Server IP Address, press Enter for default of 192.168.159.133 (ipa)
install_freeipa_server.yaml:      default: "192.168.159.133"
  3. Run Ansible for the Primary Master install; accept the default values except for the admin and Directory Manager passwords, which you must define.
ansible-playbook -vv -i inventory/hosts -l ipa install_freeipa_server.yaml -K

Inputs:

The FreeIPA Server IP Address, press Enter for default of 192.168.159.133 (ipa) [192.168.159.133]:
The FreeIPA Server FQDN, press Enter for default of ipa.example.local [ipa.example.local]:
The FreeIPA Kerberos REALM in CAPITAL, press Enter for default of DEV.EXAMPLE.LOCAL [DEV.EXAMPLE.LOCAL]:
The FreeIPA DNS Domain/Sub-Domain in lowercase, press Enter for default of dev.example.local [dev.example.local]:
The admin principal Kerberos password:
The Directory Manager password:

Options of ipa-server-install used in the playbook (run 'ipa-server-install --help' to verify):

-U, --unattended        unattended (un)installation never prompts the user
--setup-dns             configure bind with our zone
--no-dnssec-validation  Disable DNSSEC validation
--no-host-dns           Do not use DNS for hostname lookup during installation
--forwarder=FORWARDERS  Add a DNS forwarder. This option can be used multiple times
--forward-policy={only,first} DNS forwarding policy for global forwarders
--reverse-zone=REVERSE_ZONES The reverse DNS zone to use. This option can be used multiple times
-N, --no-ntp            do not configure ntp
--no-url-redirect       Do not automatically redirect the web UI
--mkhomedir             create home directories for users on their first login
  4. When the current task is 'Install/Configure FreeIPA Server', launch another terminal session, login as remote_user (gtay) at remote host (ipa), and tail the IPA Server Install log.
sudo tail -100f /var/log/ipaserver-install.log
  5. When it shows 'INFO The ipa-server-install command was successful', press Ctrl-C to stop tailing, then restart IPA. If this is the first FreeIPA server install, enable it as the default CRL generator.

Sudo to root at Primary Master (ipa):

sudo -i
ipactl restart
ipactl status
ipa-crlgen-manage enable
ipa-crlgen-manage status
  6. If there is a failure and re-installation is needed:
ipa-server-install --uninstall
dnf remove -y sssd-ipa 389-ds-core
  7. Verify the FreeIPA GUI at https://ipa.example.local/ipa/ui

It is highly recommended to apply an SSL certificate to the FreeIPA GUI web server.

Now Replica (plus CA) Install:

Login as run_user (gtay) at controller (centos8)

Edit the IPA_-related FQDN, DOMAIN and REALM values for your use case.

grep -i IPA_ install_freeipa*.yaml
install_freeipa_client.yaml:    IPA_FQDN: "ipa.example.local"
install_freeipa_client.yaml:    IPA_DOMAIN: "dev.example.local"
install_freeipa_client.yaml:    IPA_REALM: "DEV.EXAMPLE.LOCAL"
install_freeipa_replica.yaml:    IPA_DOMAIN: "dev.example.local"
install_freeipa_replica.yaml:    IPA_REALM: "DEV.EXAMPLE.LOCAL"
  1. Run Ansible for the Secondary Master install; accept the default values except for the last prompt, where you must provide the admin password.
ansible-playbook -vv -i inventory/hosts -l ipa2 install_freeipa_replica.yaml -K

Inputs:

The Fully Qualified Domain Name of the IPA Primary Master (CA-CRL), press Enter for default of ipa.example.local [ipa.example.local]:
The Fully Qualified Domain Name of the IPA Replica Master, press Enter for default of ipa2.example.local [ipa2.example.local]:
The admin Kerberos principal, press Enter for default of admin@DEV.EXAMPLE.LOCAL [admin@DEV.EXAMPLE.LOCAL]:
The admin Kerberos password:

Options of ipa-replica-install used in the playbook (run 'ipa-replica-install --help' to verify):

-U, --unattended        unattended (un)installation never prompts the user
--setup-dns             configure bind with our zone
--no-dnssec-validation  Disable DNSSEC validation
--no-host-dns           Do not use DNS for hostname lookup during installation
--forwarder=FORWARDERS  Add a DNS forwarder. This option can be used multiple times
--forward-policy={only,first} DNS forwarding policy for global forwarders
--reverse-zone=REVERSE_ZONES The reverse DNS zone to use. This option can be used multiple times
-N, --no-ntp            do not configure ntp
--no-url-redirect       Do not automatically redirect the web UI
--mkhomedir             create home directories for users on their first login
  2. When the current task is 'Install/Configure FreeIPA Replica', launch another terminal session, login as remote_user (gtay) at remote host (ipa2), and tail the IPA Replica Install log.
sudo tail -100f /var/log/ipareplica-install.log
  3. When it shows 'INFO The ipa-replica-install command was successful', press Ctrl-C to stop tailing, then restart IPA.

Sudo to root at Replica Master (ipa2):

sudo -i
ipactl restart
ipactl status
  4. If there is a failure and re-installation is needed:

Login as root at IPA Primary Master (ipa):

ipa-replica-manage del ipa2.example.local --force
# Note: there will be an error if you have not performed the fix in the last step of this section.

Login as root at IPA Replica Master (ipa2):

ipa-server-install --uninstall
dnf remove -y sssd-ipa 389-ds-core
  5. Otherwise, continue with the CA server install; ensure the CRL generator status is 'disabled' on ipa2, since the Primary Master (ipa) is acting in that role.
ipa-ca-install
ipa-crlgen-manage status
  6. Verify the FreeIPA GUI at https://ipa2.example.local/ipa/ui

  7. A Replica Master with the additional CA server role is called a Secondary Master; it is HA (High Availability) and DR (Disaster Recovery) capable once its CRL generator status is enabled.

  8. Note that as there is no DNS server serving the zone 'example.local', the following messages are normal:

ipaserver.dns_data_management: ERROR unable to resolve host name ipa.example.local. to IP address, ipa-ca DNS record will be incomplete
OR
Unknown host ipa.example.local: Host 'ipa2.example.local' does not have corresponding DNS A/AAAA record
OR
Unknown host ipa2.example.local: Host 'ipa2.example.local' does not have corresponding DNS A/AAAA record

To fix this, add an 'example.local' DNS zone in the IPA GUI along with the required 'ipa.example.local.' and 'ipa2.example.local.' DNS 'A' resource records (note the trailing dot). Once this is done, 'ipa-replica-manage list' will show no error.
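The same fix can be scripted with the ipa CLI instead of the GUI; a hedged sketch, with the IPs taken from the /etc/hosts table in 'Preparations' (run as a user with admin rights):

```shell
# Create the parent zone and the A records for both masters.
kinit admin
ipa dnszone-add example.local
ipa dnsrecord-add example.local ipa  --a-rec 192.168.159.133
ipa dnsrecord-add example.local ipa2 --a-rec 192.168.159.135
ipa-replica-manage list
```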

Install FreeIPA Client at multiple remote hosts

  1. Login as run_user (gtay) at the controller (centos8) and clone the GIT Repo if it is not already done.
git clone https://github.com/garyttt/freeipa_puppet_foreman.git
cd freeipa_puppet_foreman/ansible
  2. Edit the IPA_-related FQDN, DOMAIN and REALM values for your use case if applicable.
grep IPA_.*: install_freeipa_client.yaml
install_freeipa_client.yaml:    IPA_FQDN: "ipa.example.local"
install_freeipa_client.yaml:    IPA_DOMAIN: "dev.example.local"
install_freeipa_client.yaml:    IPA_REALM: "DEV.EXAMPLE.LOCAL"
  3. Run Ansible for the FreeIPA clients install; accept the default values except the admin password, which you must define.
ansible-playbook -vv -i inventory/hosts -l ipaclients install_freeipa_client.yaml -K

Inputs:

The admin Kerberos principal, press Enter for default of admin@DEV.EXAMPLE.LOCAL [admin@DEV.EXAMPLE.LOCAL]:
The admin Kerberos password:
  4. If there is a failure and re-installation is needed, run at the client end:

Login as root at the IPA Client:

ipa-client-install --uninstall

Populate the FreeIPA Server with some testing data

  1. Login as remote_user (gtay) at the remote_host (ipa) and clone the GIT Repo.
kinit admin
ipa config-mod --defaultshell=/bin/bash
ipa config-mod --emaildomain=example.local
git clone https://github.com/garyttt/freeipa_puppet_foreman.git
cd freeipa_puppet_foreman/ansible
bash -vx ./ipa_add_groups.sh
bash -vx ./ipa_add_users.sh
bash -vx ./ipa_add_groups_memberships.sh
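The three scripts wrap the standard 'ipa' CLI; a minimal sketch of the kind of calls they make (the 'devops' group and 'jdoe' user are illustrative names, not data from the repo):

```shell
# Add a group, a user, and the membership linking them.
ipa group-add devops --desc="DevOps team"
ipa user-add jdoe --first=John --last=Doe
ipa group-add-member devops --users=jdoe
```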

FreeIPA Backup and Restore

Please set up two cron jobs, one on the Primary Master (ipa) and one on the Secondary Master (ipa2). The two masters should run the cron at different times, e.g. 00:00 for the Primary Master and 01:00 for the Secondary Master, because running 'ipa-backup' causes a short burst of IPA service downtime, typically a few minutes.

Login as root at both IPA Primary Master (ipa) and Secondary Master (ipa2):

# One time effort to create /root/logs
[ ! -d ~/logs ] && mkdir ~/logs

# Create the following crons as shown
# crontab -l Primary Master
0 0 * * * cd /var/lib/ipa/backup && /sbin/ipa-backup > ~/logs/ipa_backup_`date "+\%d"`.log 2>&1 || true
30 0 * * * find /var/lib/ipa/backup -mtime +30 | xargs rm -rf > /dev/null 2>&1 || true

# crontab -l Secondary Master
0 1 * * * cd /var/lib/ipa/backup && /sbin/ipa-backup > ~/logs/ipa_backup_`date "+\%d"`.log 2>&1 || true
30 1 * * * find /var/lib/ipa/backup -mtime +30 | xargs rm -rf > /dev/null 2>&1 || true
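One detail worth noting in the crontab lines above: cron expands a bare '%' to a newline, hence the '\%d' escape; the day-of-month suffix means each log file overwrites itself after at most 31 days, a simple self-rotation:

```shell
# The same log-name expression the crons use (unescaped outside of cron):
# day-of-month gives a 31-slot, self-overwriting rotation.
log="ipa_backup_$(date "+%d").log"
echo "$log"
```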

Each full backup is stored as a folder in /var/lib/ipa/backup. If disaster recovery is ever genuinely needed, the backed-up LDAP data can be restored with 'ipa-restore'.

cd /var/lib/ipa/backup
ipa-restore ipa-full-2021-MM-DD-HH-MM-SS
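When restoring, the newest backup folder can be picked programmatically; a hedged sketch ('latest_backup' is a hypothetical helper, not part of the repo):

```shell
# Hypothetical helper: newest full-backup folder under the given directory,
# sorted by modification time (newest first).
latest_backup() {
  ls -1dt "$1"/ipa-full-* 2>/dev/null | head -n 1
}

# e.g.: cd /var/lib/ipa/backup && ipa-restore "$(basename "$(latest_backup .)")"
```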

FreeIPA FAQ Troubleshooting and Tips

Please refer to:

Two-Factor Authentication 2FA (Global or Per-User)


Ref: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/otp

Per-user 2FA is preferred as it is more flexible. To activate 2FA in the FreeIPA GUI, open the user details and check both the 'Password' and 'Two factor authentication (password + OTP)' boxes of the 'User authentication types' attribute.

We should enable 2FA for admin-privileged accounts via an OTP (One-Time Password). It can be done either way:

  • Option 1: User is able to self-service - User can do it in GUI via 'Actions / Add OTP Token / Add and Edit'
  • Option 2: Admin is able to assist

The 'Add and Edit' option gives us the opportunity to obtain the QR code for our mobile authenticator applications; you may use Microsoft Authenticator, Google Authenticator or FreeOTP Authenticator.

If for some reason the IPA OTP server is acting up and not working, you may disable per-user 2FA temporarily and re-enable it once the issue is fixed.

kinit admin
ipa user-show --all admin.0001
ipa user-mod --user-auth-type=password admin.0001
# When issue is fixed
ipa user-mod --user-auth-type=password --user-auth-type=otp admin.0001
ipa user-show --all admin.0001

Note that it is possible to create multiple OTP Tokens for the same user.
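For Option 2 (admin assists), the admin can create a token on the user's behalf from the CLI; a hedged sketch assuming 'ipa otptoken-add' with the standard --owner/--type options ('admin.0001' follows the naming used earlier in this section; the description is illustrative):

```shell
# Admin creates a TOTP token owned by the target user.
kinit admin
ipa otptoken-add --owner=admin.0001 --type=totp --desc="admin.0001 phone token"
```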

Secure FreeIPA Server With Let’s Encrypt SSL Certificate

Ref: https://computingforgeeks.com/secure-freeipa-server-with-lets-encrypt-ssl-certificate/

  1. Login as run_user (gtay) at the controller (centos8) and clone the GIT Repo if it is not already done.
git clone https://github.com/garyttt/freeipa_puppet_foreman.git
cd freeipa_puppet_foreman/ansible
  2. Take a backup of the current Apache web server SSL cert and key.
cp -p /var/lib/ipa/certs/httpd.crt   /var/lib/ipa/certs/httpd.crt.orig
cp -p /var/lib/ipa/private/httpd.key /var/lib/ipa/private/httpd.key.orig
tar cvf /root/var_lib_ipa_certs_private.tar /var/lib/ipa/certs/httpd.crt /var/lib/ipa/private/httpd.key
  3. Run Ansible for the FreeIPA SSL install; provide the Email/FQDN when prompted.
ansible-playbook -vv -i inventory/hosts -l ipa install_freeipa_ssl.yaml -K

Inputs:

IPA Apache Web Server SSL Cert EMAIL Contact, press Enter for default of garyttt@singnet.com.sg [garyttt@singnet.com.sg]: 
IPA Apache Web Server SSL Cert FQDN, press Enter for default of ipa.example.local [ipa.example.local]: 
  4. When the playbook has run successfully, perform the one-time post-setup instructions and verify the SSL connection. Login as root at the IPA Primary Master (ipa):
cat /var/lib/ipa/passwds/ipa.example.local-443-RSA && echo 
# When renew-le.sh is run for the first time and asks for the pass phrase, enter the contents of the above file
/root/freeipa-letsencrypt/renew-le.sh --first-time
systemctl restart httpd
ipa-certupdate
ipactl status && systemctl status ipa
openssl s_client -showcerts -verify 5 ipa.example.local:443

Note: as example.local is a POC (Proof of Concept) private domain, 'renew-le.sh --first-time' will fail with the following error:

An unexpected error occurred:
The server will not issue certificates for the identifier :: Error creating new order :: Cannot issue for "ipa.example.local": Domain name does not end with a valid public suffix (TLD)
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.

Centralized SSH Public Keys


Centralized sudoers (aka sudoRule in FreeIPA)


Centralized Host Based Access Control


Centralized Configuration - Install Puppet Enterprise

  1. Login as run_user (gtay) at the controller (centos8) and clone the GIT Repo if it is not already done.
git clone https://github.com/garyttt/freeipa_puppet_foreman.git
cd freeipa_puppet_foreman/ansible
  2. Edit 'PPM_IP' and provide the actual IP for your use case.
grep -iR PPM_IP: *
install_puppet_agent.yaml:    PPM_IP: "192.168.159.129"
install_puppet_master.yaml:    PPM_IP: "192.168.159.128"
puppet_agent_re_gen_csr.yaml:    PPM_IP: "192.168.159.129"
  3. Run Ansible for the Puppet Enterprise install; provide the admin password.
ansible-playbook -vv -i inventory/hosts -l puppet install_puppet_master.yaml -K

Inputs:

The Puppet Primary Master Admin Password:
  4. If there is a failure and re-installation is needed, run at the PE end:

Login as root:

puppet-enterprise-uninstaller

Ref: https://puppet.com/docs/pe/2019.8/uninstalling.html#uninstaller_options

  5. Otherwise, verify the PE Console at https://puppet.example.local

  6. It is highly recommended to apply an SSL certificate to the PE Console service web server; please refer to: https://puppet.com/docs/pe/2019.8/use_a_custom_ssl_cert_for_the_console.html

Centralized Configuration - Install Foreman

  1. Login as run_user (gtay) at the Foreman Server (foreman) and clone the GIT Repo if it is not already done.
git clone https://github.com/garyttt/freeipa_puppet_foreman.git
cd freeipa_puppet_foreman/ansible

Sudo to root at Foreman Server (foreman):

sudo -i
./foreman_install_firewall_rules.sh
./foreman_install.sh
  2. If there is a failure and re-installation is needed, it is easier to rebuild the foreman VM and re-run the install scripts.
  3. Otherwise, verify the Foreman GUI at https://foreman.example.local
  4. Fine-tune puppetserver within Foreman for performance: make the modifications shown and restart puppetserver. The modifications are:
  • increase ReservedCodeCacheSize from 512m to 1G (and set java.io.tmpdir to /var/tmp)
  • enable environment-class-cache and multithreading
[root@foreman ~]# diff /etc/sysconfig/puppetserver.orig /etc/sysconfig/puppetserver
9c9
< JAVA_ARGS="-Xms2G -Xmx2G -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger -XX:ReservedCodeCacheSize=512m"
---
> JAVA_ARGS="-Xms2G -Xmx2G -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger -XX:ReservedCodeCacheSize=1G -Djava.io.tmpdir=/var/tmp"
[root@foreman ~]#
[root@foreman ~]#
[root@foreman ~]#
[root@foreman ~]# diff /etc/puppetlabs/puppetserver/conf.d/puppetserver.conf.orig /etc/puppetlabs/puppetserver/conf.d/puppetserver.conf
71,72c71,72
<     environment-class-cache-enabled: false
<     multithreaded: false
---
>     environment-class-cache-enabled: true
>     multithreaded: true
[root@foreman ~]#
[root@foreman ~]#
[root@foreman ~]#
[root@foreman ~]# systemctl restart puppetserver
[root@foreman ~]# systemctl status puppetserver
● puppetserver.service - puppetserver Service
   Loaded: loaded (/usr/lib/systemd/system/puppetserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-11-25 01:57:32 EST; 7s ago
  Process: 10876 ExecStop=/opt/puppetlabs/server/apps/puppetserver/bin/puppetserver stop (code=exited, status=0/SUCCESS)
  Process: 10991 ExecStart=/opt/puppetlabs/server/apps/puppetserver/bin/puppetserver start (code=exited, status=0/SUCCESS)
 Main PID: 11017 (java)
    Tasks: 42 (limit: 4915)
   Memory: 1.0G
   CGroup: /system.slice/puppetserver.service
           └─11017 /usr/bin/java -Xms2G -Xmx2G -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger -XX:ReservedCodeCacheSize=1G -Djava.io.tmpdir=/var/tmp -XX:OnOutOfMemoryError=kill -9 %p -XX:Err>

Nov 25 01:57:14 foreman.example.local systemd[1]: puppetserver.service: Succeeded.
Nov 25 01:57:14 foreman.example.local systemd[1]: Stopped puppetserver Service.
Nov 25 01:57:14 foreman.example.local systemd[1]: Starting puppetserver Service...
Nov 25 01:57:32 foreman.example.local systemd[1]: Started puppetserver Service.
[root@foreman ~]#
  5. It is highly recommended to apply an SSL certificate to the Foreman Apache server (httpd). Assuming you have the SSL cert/key and CA details ready, you may append the following lines to foreman_install.sh and re-run it.
--reset-foreman-server-ssl-ca \
--foreman-server-ssl-cert /etc/pki/tls/certs/foreman.example.local.crt \
--foreman-server-ssl-key /etc/pki/tls/private/foreman.example.local.key \
--foreman-server-ssl-chain /etc/pki/tls/certs/gd_bundle-g2-g1.crt \
--puppet-server-foreman-ssl-ca /etc/pki/tls/certs/gd_bundle-g2-g1.crt \
--foreman-proxy-foreman-ssl-ca /etc/pki/tls/certs/gd_bundle-g2-g1.crt

Install Puppet Agent at multiple remote hosts

  1. Login as run_user (gtay) at the controller (centos8) and clone the GIT Repo if it is not already done.
git clone https://github.com/garyttt/freeipa_puppet_foreman.git
cd freeipa_puppet_foreman/ansible
  2. Edit PPM_IP for your use case if applicable; the default points to Foreman (.129), but it can be Puppet Enterprise (.128).
grep -iR PPM_IP: *
install_puppet_agent.yaml:    PPM_IP: "192.168.159.129"
install_puppet_master.yaml:    PPM_IP: "192.168.159.128"
puppet_agent_re_gen_csr.yaml:    PPM_IP: "192.168.159.129"
  3. Run Ansible for the Puppet agents install. Login as run_user (gtay):
ansible-playbook -vv -i inventory/hosts -l ppagents install_puppet_agent.yaml -K
  4. If there is a failure and re-installation is needed, run at the Puppet agent end:

Login as root:

dnf remove -y puppet-agent
# or
apt-get remove -y puppet-agent

Centralized Configuration (OS hardening, Audit and Compliance) - Install cis_profile puppet module

  1. Login as root at puppet.example.local and/or foreman.example.local
  2. Follow the instructions; it is as simple as running './install.sh':
git clone https://github.com/garyttt/cis_profile.git
cd cis_profile
./install.sh

After a few minutes, it is done.

Check 'puppet module list' for warnings or errors. If for some reason camptocamp-systemd was not installed at the latest 3.0.0 level, perform the following clean-up and re-install fix:

# puppet module list
# cd /etc/puppetlabs/code/environments/production/modules
# rm -rf systemd
# puppet module install camptocamp-systemd
# puppet module list

Outputs:

[root@{puppet,foreman} ~]# puppet module list
/etc/puppetlabs/code/environments/production/modules
├── aboe-chrony (v0.3.2)
├── camptocamp-augeas (v1.9.0)
├── camptocamp-kmod (v2.5.0)
├── camptocamp-postfix (v1.12.0)
├── camptocamp-systemd (v3.0.0)
├── fervid-secure_linux_cis (v3.0.0)
├── gtay-cis_profile (v0.1.0)
├── herculesteam-augeasproviders_core (v2.7.0)
├── herculesteam-augeasproviders_grub (v3.2.0)
├── herculesteam-augeasproviders_pam (v2.3.0)
├── herculesteam-augeasproviders_shellvar (v4.1.0)
├── herculesteam-augeasproviders_sysctl (v2.6.2)
├── kemra102-auditd (v2.2.0)
├── puppet-alternatives (v3.0.0)
├── puppet-cron (v2.0.0)
├── puppet-firewalld (v4.4.0)
├── puppet-logrotate (v5.0.0)
├── puppet-nftables (v1.3.0)
├── puppetlabs-augeas_core (v1.2.0)
├── puppetlabs-concat (v7.1.1)
├── puppetlabs-firewall (v2.8.1)
├── puppetlabs-inifile (v5.2.0)
├── puppetlabs-mailalias_core (v1.1.0)
├── puppetlabs-mount_core (v1.1.0)
├── puppetlabs-ntp (v8.5.0)
├── puppetlabs-reboot (v2.4.0)
├── puppetlabs-stdlib (v7.0.0)
└── puppetlabs-translate (v2.2.0)
/etc/puppetlabs/code/modules (no modules installed)
/opt/puppetlabs/puppet/modules
├── puppetlabs-cd4pe_jobs (v1.5.0)
├── puppetlabs-enterprise_tasks (v0.1.0)
├── puppetlabs-facter_task (v1.1.0)
├── puppetlabs-facts (v1.4.0)
├── puppetlabs-package (v2.1.0)
├── puppetlabs-pe_bootstrap (v0.3.0)
├── puppetlabs-pe_concat (v1.1.1)
├── puppetlabs-pe_databases (v2.2.0)
├── puppetlabs-pe_hocon (v2019.0.0)
├── puppetlabs-pe_infrastructure (v2018.1.0)
├── puppetlabs-pe_inifile (v1.1.3)
├── puppetlabs-pe_install (v2018.1.0)
├── puppetlabs-pe_nginx (v2017.1.0)
├── puppetlabs-pe_patch (v0.13.0)
├── puppetlabs-pe_postgresql (v2016.5.0)
├── puppetlabs-pe_puppet_authorization (v2016.2.0)
├── puppetlabs-pe_r10k (v2016.2.0)
├── puppetlabs-pe_repo (v2018.1.0)
├── puppetlabs-pe_staging (v0.3.3)
├── puppetlabs-pe_support_script (v3.0.0)
├── puppetlabs-puppet_conf (v1.2.0)
├── puppetlabs-puppet_enterprise (v2018.1.0)
├── puppetlabs-puppet_metrics_collector (v7.0.5)
├── puppetlabs-python_task_helper (v0.5.0)
├── puppetlabs-reboot (v4.1.0)
├── puppetlabs-ruby_task_helper (v0.6.0)
└── puppetlabs-service (v2.1.0)
[root@{puppet,foreman} ~]#

Reducing Puppet Agent risks in causing OS crashes and SSH login issues

Many OS-hardening Puppet Forge modules contain rules that harden the firewall (host-based, such as iptables and nftables) and SSH-related system settings (host-based, such as AllowUsers and AllowGroups in the SSH server config). These settings can cause server crashes and user login issues, so it is better to exclude them in the Hiera data/os hierarchies of the OSNAME-based Major_Release yaml files.

Please refer to the shell script for this risk mitigation; it is executed as part of the cis_profile './install.sh'.

If you were to run this script manually one time, first make a copy of common.yaml to common.yaml.orig, as shown in the script:

  • cp -p /etc/puppetlabs/code/environments/production/modules/cis_profile/data/common.yaml /etc/puppetlabs/code/environments/production/modules/cis_profile/data/common.yaml.orig

Reducing Puppet Agent Runtime in scanning for large number of files

Login as root at Puppet Enterprise (puppet) and Foreman (foreman):

For practical reasons we should define EXCLUDES so as to reduce the runtime of Puppet Agent and save system resources.

[root@{puppet,foreman} files]# pwd
/etc/puppetlabs/code/environments/production/modules/secure_linux_cis/files
[root@{puppet,foreman} files]#
[root@{puppet,foreman} files]#
[root@{puppet,foreman} files]# diff ensure_no_ungrouped.sh.orig ensure_no_ungrouped.sh
2c2,9
< df --local -P | awk {'if (NR!=1) print $6'} | xargs -I '{}' find '{}' -xdev -nogroup
---
> # Reasons to exclude:
> # /var/cache/private/fwupdmgr - Ubuntu Firmware Update Manager work files
> # /var/lib/docker/overlay2 - Docker work files
> # /var/lib/kubelet/pods - Kubernetes work files
> # /var/opt/microsoft/omsagent - Azure Linux VM cloud-init work files
> EXCLUDES="^/var/cache/private/fwupdmgr|^/var/lib/docker/overlay2|^/var/lib/kubelet/pods|^/var/opt/microsoft/omsagent"
> df --local -P | awk {'if (NR!=1) print $6'} | xargs -I '{}' find '{}' -xdev -nogroup | egrep -v "$EXCLUDES"
>
[root@{puppet,foreman} files]#
[root@{puppet,foreman} files]#
[root@{puppet,foreman} files]#  diff ensure_no_unowned.sh.orig ensure_no_unowned.sh
2c2,9
< df --local -P | awk {'if (NR!=1) print $6'} | xargs -I '{}' find '{}' -xdev -nouser
---
> # Reasons to exclude:
> # /var/cache/private/fwupdmgr - Ubuntu Firmware Update Manager work files
> # /var/lib/docker/overlay2 - Docker work files
> # /var/lib/kubelet/pods - Kubernetes work files
> # /var/opt/microsoft/omsagent - Azure Linux VM cloud-init work files
> EXCLUDES="^/var/cache/private/fwupdmgr|^/var/lib/docker/overlay2|^/var/lib/kubelet/pods|^/var/opt/microsoft/omsagent"
> df --local -P | awk {'if (NR!=1) print $6'} | xargs -I '{}' find '{}' -xdev -nouser | egrep -v "$EXCLUDES"
>
[root@{puppet,foreman} files]#
[root@{puppet,foreman} files]#
[root@{puppet,foreman} files]# diff ensure_no_world_writable.sh.orig ensure_no_world_writable.sh
1c1,9
< df --local -P | awk {'if (NR!=1) print $6'} | xargs -I '{}' find '{}' -xdev -type f -perm -0002
---
> #!/bin/bash
> # Reasons to exclude:
> # /var/cache/private/fwupdmgr - Ubuntu Firmware Update Manager work files
> # /var/lib/docker/overlay2 - Docker work files
> # /var/lib/kubelet/pods - Kubernetes work files
> # /var/opt/microsoft/omsagent - Azure Linux VM cloud-init work files
> EXCLUDES="^/var/cache/private/fwupdmgr|^/var/lib/docker/overlay2|^/var/lib/kubelet/pods|^/var/opt/microsoft/omsagent"
> df --local -P | awk {'if (NR!=1) print $6'} | xargs -I '{}' find '{}' -xdev -type f -perm -0002 | egrep -v "$EXCLUDES"
>
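The three diffs share one pattern: each excluded mount point is anchored with '^' and the find output is filtered through egrep -v (grep -Ev is the modern spelling). A minimal standalone sketch of the filter, using sample paths:

```shell
# Anchored exclude pattern, as in the modified scan scripts above
# (shortened here for illustration).
EXCLUDES="^/var/lib/docker/overlay2|^/var/lib/kubelet/pods"

# Sample scan output piped through the same filter the scripts use:
# only paths outside the excluded trees survive.
printf '%s\n' /etc/passwd /var/lib/docker/overlay2/diff/x /home/gtay/notes |
  grep -Ev "$EXCLUDES"
```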

Centralized OS Hardening Dashboard - How to create cis_profile host-group in Foreman

Login to Foreman 3.0.1 GUI and refer to the doc:

The doc describes the steps to define a Host Group, a logical grouping of all hosts to be OS-hardened. First we must import and update the 'production' environment puppet classes, then add the 'cis_profile' class to the 'cis_profile' Host Group. After that, select and add all hosts to the 'cis_profile' Host Group.

The only Smart Class Parameter that needs to be changed is:

  • enforcement_level: from '1' to '2'

Configure FreeIPA LDAP User Authentication for PE and Foreman and Softerra LDAP Browser

Please refer to:

Once the IPA 'ldapread' account has been created, you can also use it in the profile definition of Softerra LDAP Browser 4.5, a freeware Windows desktop tool that makes IPA LDAP browsing a walk in the park.

Definition of 'ipa.example.local' profile (Properties) in Softerra LDAP Browser 4.5:

  • Host: ipa.example.local
  • Port: 389 or 636
  • BaseDN: dc=dev,dc=example,dc=local
  • Use Secure Connection checked if port 636
  • Other Credentials / Mechanism: Simple
  • Other Credentials / Principal: uid=ldapread,cn=users,cn=accounts,dc=dev,dc=example,dc=local
  • Other Credentials / Password: ********
  • Other Credentials / Save password checked
  • Entry / Filters: (objectClass=*)
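The same bind can be verified from any Linux host with the OpenLDAP client tools before configuring Softerra; a hedged sketch (ldapsearch comes from the openldap-clients package; -W prompts for the ldapread password):

```shell
# Simple bind as ldapread and list entry DNs under the base DN.
ldapsearch -x -H ldap://ipa.example.local:389 \
  -D "uid=ldapread,cn=users,cn=accounts,dc=dev,dc=example,dc=local" -W \
  -b "dc=dev,dc=example,dc=local" "(objectClass=*)" dn
```

For port 636, use `-H ldaps://ipa.example.local:636` instead.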

Notable Foreman issues due to version upgrade, OS patching, Performance Tuning or others

  1. Foreman puppetserver error '(Error) Cannot determine basic system flavour' post 2.5 to 3.0.1 upgrade.
  • Root Cause: default java.io.tmpdir is /tmp and not having 'exec' file system permission.
  • Fix: backup /etc/sysconfig/puppetserver, append '-Djava.io.tmpdir=/var/tmp' to end of JAVA_ARGS and restart puppetserver.
  • Ref: https://access.redhat.com/solutions/3370091
  2. Post OS patching and reboot, the Foreman Apache httpd server fails to start, breaking the Foreman GUI.
  • Root Cause: extra '.conf' files other than the foreman-specific 05-* files get added to the /etc/httpd/conf.d and /etc/httpd/conf.modules.d folders.
  • Fix: backup, inspect and remove the following files, then re-patch the OS, reboot, and verify the Apache httpd server.
/etc/httpd/conf.d/welcome.conf
/etc/httpd/conf.d/userdir.conf
/etc/httpd/conf.d/ssl.conf
/etc/httpd/conf.d/autoindex.conf
/etc/httpd/conf.modules.d/00-*.conf
/etc/httpd/conf.modules.d/01-*.conf
  3. All Puppet agents encountered port 8140 connection issues.
  • Root Cause: Foreman server was not fine-tuned for performance.
  • Fix: please refer to 'Centralized Configuration - Install Foreman'.
  4. Foreman server OS patching error: 'Problem: package foreman-3.0.1-1.el8.noarch requires rubygem(net-ssh) = 4.2.0, but none of the providers can be installed'.
[root@foreman ~]# dnf upgrade -y
Last metadata expiration check: 2:40:29 ago on Tue 30 Nov 2021 12:47:14 PM +08.
Error:
 Problem: package foreman-3.0.1-1.el8.noarch requires rubygem(net-ssh) = 4.2.0, but none of the providers can be installed
  - cannot install both rubygem-net-ssh-5.1.0-2.el8.noarch and rubygem-net-ssh-4.2.0-3.el8.noarch
  - cannot install both rubygem-net-ssh-4.2.0-3.el8.noarch and rubygem-net-ssh-5.1.0-2.el8.noarch
  - cannot install the best update candidate for package rubygem-net-ssh-4.2.0-3.el8.noarch
  - cannot install the best update candidate for package foreman-3.0.1-1.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
  • Root Cause: unknown
  • Temp Fix: run 'dnf upgrade -y --nobest'
