workshop-gitops-infra-deploy

Create cluster

⚠️ First of all, create a ./pullsecret.txt containing the pull secret to be used.

This script deploys both the OCP hub cluster and the SNO managed clusters on AWS. You must specify the following parameters:

sh ocp4-install.sh <cluster_name> <region_aws> <base_domain> <replicas_master> <replicas_worker> <vpc_id|false> <aws_id> <aws_secret> <ocp_version|null>

The VPC id is required only if you are deploying on an existing VPC; otherwise, specify "false". The OCP version is optional as well; skip it to install the latest version.

For deploying the hub cluster:

sh ocp4-install.sh argo-hub eu-central-1 <base_domain> 3 3 false <aws_id> <aws_secret>

For deploying a SNO managed cluster:

sh ocp4-install.sh sno-1 eu-central-1 <base_domain> 1 0 <vpc_id> <aws_id> <aws_secret> 

⚠️ It is recommended to name the hub and SNO clusters argo-hub and sno-x.

You can check your VPC id on AWS console or by running this command:

aws ec2 describe-vpcs 
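If the full describe-vpcs output is too verbose, it can be narrowed down with a --query expression; this is just a convenience sketch:

```shell
# List only the VPC ids and their CIDR blocks in table form
aws ec2 describe-vpcs \
    --query "Vpcs[].{ID:VpcId,CIDR:CidrBlock}" \
    --output table
```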

Deploy and configure ArgoCD

⚠️ You need to install argocd CLI and yq.

This script installs the GitOps operator, deploys an ArgoCD instance, and adds the managed clusters. You must specify the number of deployed SNO clusters to be managed by ArgoCD:

sh deploy-gitops.sh <amount_of_sno_clusters>

For example, to add 3 SNO clusters (sno-1, sno-2 and sno-3):

sh deploy-gitops.sh 3

This script configures Argo RBAC so that the users created on the hub cluster for the SNO managed clusters (user-1, user-2, ...) can only view project-sno-x and the sno-x destination cluster, and can therefore only deploy to the allowed destination within the allowed project.
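The resulting RBAC bindings look roughly like the following ArgoCD policy.csv fragment, shown here for sno-1 only; the role and user names are illustrative, the script generates the actual entries:

```
p, role:sno-1, applications, get, project-sno-1/*, allow
p, role:sno-1, clusters, get, sno-1, allow
p, role:sno-1, projects, get, project-sno-1, allow
g, user-1, role:sno-1
```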

Deploy keycloak

To deploy an instance of Keycloak and create the corresponding realms, clients and users, run this script:

sh set-up-keycloak.sh <number_of_clusters> <subdomain>
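For example, to set up Keycloak for three clusters under a subdomain (the subdomain below is a placeholder):

```shell
sh set-up-keycloak.sh 3 apps.example.com
```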

Beware that you need to update the certificate in your Helm charts repo:

oc -n openshift-ingress-operator get secret router-ca -o jsonpath="{ .data.tls\.crt }" | base64 -d -i 
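To save the decoded CA certificate to a file you can commit to the charts repo, redirect the output (router-ca.crt is an example filename):

```shell
# Extract the router CA certificate and write it to a local file
oc -n openshift-ingress-operator get secret router-ca \
    -o jsonpath="{ .data.tls\.crt }" | base64 -d > router-ca.crt
```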

Deploy FreeIPA

Follow the instructions here to deploy FreeIPA server.

git clone https://github.com/redhat-cop/helm-charts.git

cd helm-charts/charts
helm dep up ipa

helm upgrade --install ipa . --namespace=ipa --create-namespace --set app_domain=apps.<domain>

You have to wait for IPA to be fully deployed before running the following commands; verify that the ipa-1-deploy pod has completed.
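One way to check is to watch the pods in the ipa namespace until the deploy pod reports Completed:

```shell
# Watch until the ipa-1-deploy pod shows STATUS Completed
oc get pods -n ipa --watch
```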

Then, expose the ipa service as a NodePort and allow external traffic on AWS by configuring the security groups.

oc expose service ipa  --type=NodePort --name=ipa-nodeport --generator="service/v2" -n ipa
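Assuming the AWS CLI is configured, the assigned node port can be read back from the service and opened in the worker security group; the security group id below is a placeholder:

```shell
# Look up the NodePort assigned to the exposed service
NODE_PORT=$(oc get svc ipa-nodeport -n ipa -o jsonpath='{.spec.ports[0].nodePort}')

# Allow external TCP traffic to that port on the worker security group
aws ec2 authorize-security-group-ingress \
    --group-id <worker_sg_id> \
    --protocol tcp \
    --port "$NODE_PORT" \
    --cidr 0.0.0.0/0
```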

Create FreeIPA users

To create FreeIPA users, run these commands:

# Login to kerberos
oc exec -it dc/ipa -n ipa -- \
    sh -c "echo Passw0rd123 | /usr/bin/kinit admin"
    
# Create groups if they don't exist

oc exec -it dc/ipa -n ipa -- \
    sh -c "ipa group-add student --desc 'wrapper group' || true && \
    ipa group-add ocp_admins --desc 'admin openshift group' || true && \
    ipa group-add ocp_devs --desc 'edit openshift group' || true && \
    ipa group-add ocp_viewers --desc 'view openshift group' || true && \
    ipa group-add-member student --groups=ocp_admins --groups=ocp_devs --groups=ocp_viewers || true"

# Add demo users

oc exec -it dc/ipa -n ipa -- \
    sh -c "echo Passw0rd | \
    ipa user-add paul --first=paul \
    --last=ipa --email=paulipa@redhatlabs.dev --password || true && \
    ipa group-add-member ocp_admins --users=paul"

oc exec -it dc/ipa -n ipa -- \
    sh -c "echo Passw0rd | \
    ipa user-add henry --first=henry \
    --last=ipa --email=henryipa@redhatlabs.dev --password || true && \
    ipa group-add-member ocp_devs --users=henry"

oc exec -it dc/ipa -n ipa -- \
    sh -c "echo Passw0rd | \
    ipa user-add mark --first=mark \
    --last=ipa --email=markipa@redhatlabs.dev --password || true && \
    ipa group-add-member ocp_viewers --users=mark"

Deploy vault server

To deploy an instance of vault server:

git clone https://github.com/hashicorp/vault-helm.git

helm repo add hashicorp https://helm.releases.hashicorp.com

oc new-project vault

helm install vault hashicorp/vault \
    --set "global.openshift=true" \
    --set "server.dev.enabled=true" --values values.openshift.yaml
    
oc expose svc vault -n vault

Then you must expose the vault server so it can be reached from the SNO clusters.

Once the server is deployed and the argocd-vault-plugin is working on the SNO clusters, you must configure vault server authentication so Argo can authenticate against it.

Follow the instructions here.

# enable kv-v2 engine in Vault
oc exec vault-0 -- vault secrets enable kv-v2

# create a kv-v2 secret (put your own secrets here)
oc exec vault-0 -- vault kv put kv-v2/demo ldap_bind_password="Passw0rd"

oc exec vault-0 -- vault kv get kv-v2/demo

# create policy to enable reading above secret
vault policy write demo - <<EOF # Replace with your app name
path "kv-v2/data/demo" {
  capabilities = ["read"]
}
EOF

vault auth enable approle

vault write auth/approle/role/argocd secret_id_ttl=120h token_num_uses=1000 token_ttl=120h token_max_ttl=120h secret_id_num_uses=4000  token_policies=demo

vault read auth/approle/role/argocd/role-id

vault write -f auth/approle/role/argocd/secret-id
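The role-id and secret-id returned above can be checked with an AppRole login; the ids below are placeholders:

```shell
# Exchange the role-id/secret-id pair for a Vault token
vault write auth/approle/login \
    role_id=<role_id> \
    secret_id=<secret_id>
```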

Destroy cluster

If you want to delete a cluster, first run this command to destroy it from AWS:

CLUSTER_NAME=<cluster_name>
openshift-install destroy cluster --dir install/install-dir-$CLUSTER_NAME --log-level info

Then remove it from ArgoCD instance:

# Make sure you are logged in to the hub cluster; if the cluster you are deleting is the hub itself, this section is not required
export KUBECONFIG=./install/install-dir-argo-hub/auth/kubeconfig
# Login to argo server
ARGO_SERVER=$(oc get route -n openshift-operators argocd-server  -o jsonpath='{.spec.host}')
ADMIN_PASSWORD=$(oc get secret argocd-cluster -n openshift-operators  -o jsonpath='{.data.admin\.password}' | base64 -d)
# Remove managed cluster
argocd login $ARGO_SERVER --username admin --password $ADMIN_PASSWORD --insecure
argocd cluster rm $CLUSTER_NAME
# Then remove installation directories
rm -rf ./backup/backup-$CLUSTER_NAME
rm -rf ./install/install-dir-$CLUSTER_NAME
