
One Touch Provisioning across Multi-Cloud

Elevator Pitch

This method/pattern is our opinionated implementation of the GitOps principles, using the latest and greatest tooling available, to enable you to hit one big red button (figuratively) and start provisioning a platform that provides Cluster and Virtual Machine provisioning capabilities, Governance and policy management, observability of Clusters and workloads, and finally deployment of applications, such as IBM Cloud Paks, all within a single command*.

  • Codified, Repeatable and Auditable.

*Disclaimer: it may actually be more than just one command to type. 😉

The method/pattern is not intended to be used straight in Production, and a lot of assumptions have been made when putting this together. Its main intention is to show the Art of the Possible, but it can be used as a base to roll your own.

Whilst all efforts have been made to provide a complete One Touch Provisioning method/pattern, it may not suit every environment and your mileage may vary.

Shout outs πŸ“£

This asset has been built on the shoulders of giants and leverages the great work and effort undertaken by the Cloud Native Toolkit - GitOps Production Deployment Guide, IBM Garage TSA and Red Hat Communities of Practice teams. Without these efforts, this asset would have struggled to get off the ground.

The reference architecture for this GitOps workflow can be found here.

Table of contents

Note βœ‹

This repository provides an opinionated point of view on how tooling and principles such as Terraform, Ansible and GitOps can be used to manage the infrastructure, services and application layers of OpenShift/Kubernetes-based systems. It takes into account the various personas interacting with the system and accounts for separation of duties.

It is assumed that you have already configured the compute, networks, storage, Security Groups, Firewalls, VPC, etc to enable the platform to be deployed. The asset will not perform those actions for you, and it will fail if you attempt to deploy it without those all pre-configured.

This repository is not intended to be a Step-by-Step Guide and some prior knowledge in OpenShift/Kubernetes/VM Provisioning is expected.

Pattern Capabilities πŸš€

  • The pattern will deploy an opinionated OpenShift Advanced Cluster Management Hub running OpenShift GitOps (ArgoCD), OpenShift Pipelines (Tekton), OpenShift Data Foundation (Rook.io), Ansible Automation Platform (additional subscription required), Red Hat Advanced Cluster Management (Open Cluster Management), Red Hat Advanced Cluster Security (StackRox), Quay Registry, Quay Container Security, OpenShift Virtualisation (KubeVirt), IBM Infrastructure Automation from the IBM Cloud Pak for AIOps 3.2, SealedSecrets, Instana, Turbonomic and RHACM Observability.

  • Deployment and management of Managed OpenShift Clusters via OpenShift GitOps (everything is Infrastructure as Code) onto Amazon Web Services, Microsoft Azure, IBM Cloud, Google Cloud Platform, VMware vSphere and Bare-metal environments, including Single Node OpenShift onto on-premise hosts. This allows Managed OpenShift Clusters to be treated as "Cattle", not "Pets".

  • Deployed Managed OpenShift Clusters on AWS, Azure and GCP can be Hibernated when not in use to reduce the amount of resources consumed on your provider, potentially lowering costs.

  • Configured to Auto-Discover OpenShift Clusters from the provided Red Hat OpenShift Cluster Manager credentials, providing the opportunity to import the OpenShift clusters as Managed Clusters and automatically configure them into the OpenShift GitOps Cluster.

  • Centralised OpenShift GitOps for deployment of Applications across any Managed OpenShift Cluster. View all deployed Applications across the entire fleet of OpenShift Clusters, regardless of each Cluster's location (i.e. AWS, GCP, on-premise, etc.).

  • Automatically apply policies and governance to ALL Clusters within Red Hat Advanced Cluster Management, regardless of each Cluster's location.

  • The Hub Cluster can self-host the Instana Virtual Machine using OpenShift Virtualisation, managed via OpenShift GitOps, or the Virtual Machine can be automatically deployed to an IaaS environment using IBM Infrastructure Automation.

  • Can be configured to automatically connect to IaaS environments, enabling deployment of Virtual Machines via IBM Infrastructure Automation and OpenShift Pipelines.

  • Can be configured to automatically deploy applications to Managed Clusters via OpenShift GitOps. An example provided will deploy IBM Cloud Pak for Integration (utilising full GitOps Principles) to Managed Clusters.

Coming Soon

  • Zero Touch Provisioning of Managed OpenShift Clusters to Bare-metal nodes (think Edge deployments)

Red Hat Advanced Cluster Management Hub and Spoke Clusters Concept

We leverage two Open Source technologies to underpin the functionality within this pattern: ArgoCD (aka OpenShift GitOps) and Open Cluster Management (aka Red Hat Advanced Cluster Management, or RHACM).

To fully appreciate the terminology used within RHACM, and more so within the pattern, we will spend a few moments providing some context.

RHACM is built around a Hub and Spoke architecture, with the Hub Cluster, running RHACM, providing cluster and application lifecycle management along with other aspects such as Governance and Observability of any Spoke Clusters under its management.

The diagram below shows a typical Hub and Spoke deployment over various clouds.

Hub and Spoke

Overview of Git Repositories

We leverage several repositories to make up the pattern. These may seem overwhelming to begin with, but there is some method and thought behind them. We approached this pattern with scale in mind, and running a single mono repository with all the manifests quickly showed that it does not lend itself to scale that well.

We really want a method that allows a decentralised approach to scale: one where teams work together across the entire pattern, with some guard-rails, to enable rapid deployment of OpenShift Clusters, Applications and Policies at scale.

Taking the Kubernetes Ownership Model, we looked at which personas would typically contribute to and have ownership over a repository, and separated a single mono repository into several to reflect that. An example would be a Platform team that primarily contributes to and has ownership over a repository defining the infrastructure-related components of a Kubernetes Cluster (e.g. namespaces, machinesets, ingress-controllers, storage, etc.); they may also be best placed to contribute to and own a repository that defines how OpenShift Clusters are created on different Cloud Providers. Similar examples can be given for a set of Services which support Application developers, where we would separate these into their own repositories, again owned and primarily contributed to by a Services team. A Risk/Security team owning and primarily contributing to a Policies repository is another example.

We then look to enable all these repositories as centralised repositories, either at an organisational, business or product level, where each OpenShift Cluster, including the Hub Cluster, is deployed with OpenShift GitOps (aka ArgoCD) and bootstrapped via a single repository, which holds ArgoCD Applications that point back to these centralised repositories.
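
As an illustration only (the Application name, repository URL and path below are hypothetical and will differ in your fork), one of these bootstrap ArgoCD Applications pointing back to a centralised repository might look like this:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: infra
  namespace: openshift-gitops
spec:
  project: default
  source:
    # Centralised infrastructure repository (placeholder URL)
    repoURL: https://github.com/<GIT_ORG>/otp-gitops-infra
    targetRevision: master
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-gitops
  syncPolicy:
    automated:
      prune: true
      selfHeal: true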

The advantages of this approach are a reduction in duplicated code and the assurance that deployed OpenShift Clusters all meet or share the same configuration where applicable, for example node sizes, autoscaling, networking, or RBAC policies that are important for overall governance and security. As a result, the desired configurations are fully replicated across the Clusters, regardless of where they land: Public, Private, On-Prem, etc.

Manually identifying drift and maintaining conformance across different clusters within different Clouds as they scale is, of course, not a viable alternative, so this approach lends itself very well.

Utilising a shared and decentralised approach has the added advantage of lowering the barrier to entry (e.g. a Developer who needs to understand how we are deploying a cluster can read the Git Repository without needing to delay the Platform team), lowering the cost of change and opening up opportunities to innovate.

For our pattern, we've termed the above 1 + 5 + n Git Repositories.

  • 1 Repository being the Red Hat Advanced Cluster Management Hub Cluster

  • 5 Repositories (Infrastructure, Services, Applications, Clusters, Policies) being common / shared

  • n Repositories being the repositories that you will use to bootstrap each of your deployed Managed OpenShift Clusters

1+5+n Repositories

By using a common set of repositories, we can quickly scale out Cluster Deployments while reducing the risk of misconfiguration and drift.

Use-cases for different Git Repository organisation

As we mature this method/pattern, we have seen different use-cases that require a different Git Repository organisation.

Our view is that leveraging the 1 + 5 + n Git Repositories allows more flexibility in what is deployed into a cluster and works better at scale. The Line of Business, Product team, end-users, etc. can have full control via their own ArgoCD instance which is configured against a Git repository they control. This works for privacy and security, as they control who can and cannot see the objects within their repository. We'll term this a self-managed Cluster. This may suit a team which has experience in OpenShift, clearly understands its requirements, and is comfortable managing the environment themselves.

However, there are occasions where there may be a requirement for a Cluster to be provisioned where the end-users wish for a more managed cluster. They understand their applications, and prefer to just focus on those aspects. They are happy to commit to a Git Repository that they do not own and prefer that management of the Cluster is owned by another team. In this scenario, it would make sense to manage the cluster and its applications via the Hub Repository alone. Think of this as a shared multi-tenancy operational model: everyone with access to the Git repository can see all objects within it. To demo this model, we've left example folder structures for the managed use-case; these can be found in 0-bootstrap/spokeclusters/, and a hypothetical layout is sketched below.
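
For orientation only, a hypothetical layout of that folder (the actual structure and folder names in the repository may differ) could look like:

0-bootstrap/spokeclusters/
β”œβ”€β”€ 1-infra/        # infrastructure manifests applied to the spoke cluster
β”œβ”€β”€ 2-services/     # services deployed to the spoke cluster
└── 4-apps/         # applications owned by the end-user team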

Note

It should be noted that these are just a few methods to manage your environments, and we encourage you to choose a method that works for you.

Pre-requisites ⬅️

Red Hat OpenShift cluster β­•

Minimum OpenShift v4.8+ is required.

Firstly, build a "bare-bones" Red Hat OpenShift cluster using either the IPI (Installer Provisioned Infrastructure) or UPI (User Provisioned Infrastructure) method, or a Managed OpenShift offering like AWS ROSA, Azure ARO or IBM Cloud ROKS.

IPI Methods
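
As a minimal sketch (assuming you have downloaded the openshift-install binary and have your platform credentials and pull secret to hand), an IPI install boils down to:

# Generate an install-config.yaml interactively, then create the cluster
openshift-install create install-config --dir=ocp-hub
openshift-install create cluster --dir=ocp-hub --log-level=info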

UPI Methods

Leveraging the work undertaken by the Cloud Native Toolkit team, you can utilise the following Github repositories to assist you with your UPI install of OpenShift.

Managed OpenShift

CLI tools πŸ’»

  • Install the OpenShift CLI oc (version 4.9+). The binary can be downloaded from the Help menu of the OpenShift Console.

    Download oc cli

    oc cli

  • Install helm and kubeseal from brew.sh

    brew install kubeseal && brew install helm
  • Log in from a terminal window.

    oc login --token=<token> --server=<server>

IBM Entitlement Key for IBM Cloud Paks πŸ”‘

  • An IBM Entitlement Key is required to pull IBM Cloud Pak specific container images from the IBM Entitled Registry.

To get an entitlement key:

  1. Log in to MyIBM Container Software Library with an IBMid and password associated with the entitled software.
  2. Select the View library option to verify your entitlement(s).
  3. Select the Get entitlement key to retrieve the key.
  • Create a Secret containing the entitlement key within the ibm-infra-automation namespace.

    oc new-project ibm-infra-automation || true
    oc create secret docker-registry ibm-entitlement-key -n ibm-infra-automation \
    --docker-username=cp \
    --docker-password="<entitlement_key>" \
    --docker-server=cp.icr.io \
    --docker-email=myemail@ibm.com

Setup git repositories

  1. Create a new GitHub Organization using instructions from this GitHub documentation.

  2. From each template repository, click the Use this template button and create a copy of the repository in your new GitHub Organization. Note: Make sure the repositories are public so that ArgoCD can access them.

    Create repository from a template

  3. (Optional) OpenShift GitOps can leverage GitHub tokens. Many users may wish to use private Git repositories on GitHub to store their manifests, rather than leaving them publicly readable. The following steps will need to be repeated for each repository.

    • Generate GitHub Token

      • Visit https://github.com/settings/tokens and select "Generate new token". Give your token a name, an expiration date and select the scope. The token will need to have repo access.

        Create a GitHub Secret

      • Click on "Generate token" and copy your token! You will not get another chance to copy your token, and you will need to regenerate it if you missed the opportunity.

    • Generate OpenShift GitOps Namespace

      oc apply -f setup/setup/0_openshift-gitops-namespace.yaml
    • Generate Secret

      • export the GitHub token you copied earlier.

        $ export GITHUB_TOKEN=<insert github token>
        $ export GIT_ORG=<git organisation>
      • Create a secret that will reside within the openshift-gitops namespace.

        $ mkdir -p setup/ocp/repo-secrets
        $ cat <<EOF > setup/ocp/repo-secrets/otp-gitops-repo-secret.yaml
        apiVersion: v1
        kind: Secret
        metadata:
          name: otp-gitops-repo-secret
          namespace: openshift-gitops
          labels:
            argocd.argoproj.io/secret-type: repository
        stringData:
          url: https://github.com/${GIT_ORG}/otp-gitops
          password: ${GITHUB_TOKEN}
          username: not-used
        EOF
      • Repeat the above steps for otp-gitops-infra, otp-gitops-services, otp-gitops-apps, otp-gitops-clusters and otp-gitops-policies repositories.

    • Apply Secrets to the OpenShift Cluster

      oc apply -f setup/ocp/repo-secrets/
      rm -rf setup/ocp/repo-secrets
  4. Clone the repositories locally.

    mkdir -p <gitops-repos>
    cd <gitops-repos>
    
    # Example: set default Git org for clone commands below
    GIT_ORG=one-touch-provisioning
    
    # Clone using SSH
    git clone git@github.com:$GIT_ORG/otp-gitops.git
    git clone git@github.com:$GIT_ORG/otp-gitops-infra.git
    git clone git@github.com:$GIT_ORG/otp-gitops-services.git
    git clone git@github.com:$GIT_ORG/otp-gitops-apps.git
    git clone git@github.com:$GIT_ORG/otp-gitops-clusters.git
    git clone git@github.com:$GIT_ORG/otp-gitops-policies.git
  5. Update the default Git URL and branch references in your otp-gitops repository by running the provided ./scripts/set-git-source.sh script.

    cd otp-gitops
    GIT_ORG=<GIT_ORG> GIT_BRANCH=master ./scripts/set-git-source.sh
    git add .
    git commit -m "Update Git URL and branch references"
    git push origin master

Install and configure OpenShift GitOps

  1. Install the OpenShift GitOps Operator and create a ClusterRole and ClusterRoleBinding.

    cd setup
    oc apply -f setup
    while ! oc wait crd applications.argoproj.io --timeout=-1s --for=condition=Established  2>/dev/null; do sleep 30; done
    while ! oc wait pod --timeout=-1s --for=condition=Ready -l '!job-name' -n openshift-gitops > /dev/null; do sleep 30; done
  2. Create a custom ArgoCD instance with custom health checks. To customise which health checks are included, comment out those you don't need in setup/argocd-instance/kustomization.yaml.

    oc apply -k argocd-instance
    while ! oc wait pod --timeout=-1s --for=condition=ContainersReady -l app.kubernetes.io/name=openshift-gitops-cntk-server -n openshift-gitops > /dev/null; do sleep 30; done

    Note: We use a custom openshift-gitops-repo-server image to enable the use of Plugins within OpenShift GitOps. This is required to allow RHACM to utilise the Policy Generator plugin. The Dockerfile can be found here: https://github.com/one-touch-provisioning/otp-custom-argocd-repo-server.

  3. If using IBM Cloud ROKS as a Hub Cluster, you will need to configure TLS.

    scripts/patch-argocd-tls.sh
  4. (Optional) Create a console link to OpenShift GitOps

    export ROUTE_NAME=openshift-gitops-cntk-server
    export ROUTE_NAMESPACE=openshift-gitops
    export CONSOLE_LINK_URL="https://$(oc get route $ROUTE_NAME -o=jsonpath='{.spec.host}' -n $ROUTE_NAMESPACE)"
    envsubst < <(cat setup/4_consolelink.yaml.envsubst) | oc apply -f -

Configure manifests for Infrastructure

If you are running a managed OpenShift cluster on IBM Cloud, you can deploy OpenShift Data Foundation as an add-on.
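
For example, with the IBM Cloud CLI this can be enabled with something like the following (the add-on name and available versions may vary; check with ibmcloud oc cluster addon ls):

# Enable the OpenShift Data Foundation add-on on an IBM Cloud ROKS cluster (cluster name is a placeholder)
ibmcloud oc cluster addon enable openshift-data-foundation --cluster <hub-cluster-name>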

On AWS, Azure, GCP and vSphere, run the following script to configure the machinesets, infra nodes and storage definitions for the Cloud you are using for the Hub Cluster:

./scripts/infra-mod.sh

Bootstrap the OpenShift cluster πŸ₯Ύ

  1. Retrieve the ArgoCD/GitOps URL and admin password and log into the UI

    oc get route -n openshift-gitops openshift-gitops-cntk-server -o template --template='https://{{.spec.host}}'
    
    # Password is not needed if Log In via OpenShift is used (default)
    oc extract secrets/openshift-gitops-cntk-cluster --keys=admin.password -n openshift-gitops --to=-
  2. The resources required to be deployed for this asset have been pre-selected, and you should just need to clone the otp-gitops repository in your Git Organization if you have not already done so. However, you can review and modify the resources deployed by editing the following:

    0-bootstrap/hub/1-infra/kustomization.yaml
    0-bootstrap/hub/2-services/kustomization.yaml
    

If you choose to disable any Infrastructure or Services resources before the Initial Bootstrap, you will need to re-commit those changes back to your Git repository, otherwise they will not be picked up by OpenShift GitOps.
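
For example, after editing the kustomization files above:

git add 0-bootstrap/hub/1-infra/kustomization.yaml 0-bootstrap/hub/2-services/kustomization.yaml
git commit -m "Enable/disable Infrastructure and Services resources"
git push origin master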

  1. Deploy the OpenShift GitOps Bootstrap Application.

    oc apply -f 0-bootstrap/hub/bootstrap.yaml
  2. It's recommended to deploy the Infrastructure components first, then the Services components once complete. ArgoCD Sync waves are used to manage the order of manifest deployments, but we have seen occasions where applying both the Infrastructure and Services layers at the same time can fail. YMMV.

Once the Infrastructure layer has been deployed, update the 0-bootstrap/hub/kustomization.yaml manifest to enable the Services layer and commit to Git. OpenShift GitOps will then automatically deploy the Services.

resources:
- 1-infra/1-infra.yaml
## Uncomment 2-services/2-services.yaml once
## 1-infra/1-infra.yaml has been completed
- 2-services/2-services.yaml
## Uncomment to deploy Clusters and Applications
## Must be done after all steps for 1-infra & 2-services
## have been completed.
# - 3-clusters/3-clusters.yaml
# - 4-apps/4-apps.yaml
# - 5-policies/5-policies.yaml

Credentials

After Infrastructure Automation and RHACM have been installed successfully (all apps are synced in ArgoCD), you can retrieve the credentials for each console as follows.

Infrastructure Automation

The route to IBM Infrastructure Automation is

oc -n ibm-common-services get route cp-console --template '{{.spec.host}}'

To use Infrastructure Automation, log in as one of the following users with the password Passw0rd:

POD=$(oc -n ldap get pod -l app=ldap -o jsonpath="{.items[0].metadata.name}")
oc -n ldap exec $POD -- ldapsearch -LLL -x -H ldap:// -D "cn=admin,dc=ibm,dc=com" -w Passw0rd -b "dc=ibm,dc=com" "(memberOf=cn=operations,ou=groups,dc=ibm,dc=com)" dn

The default admin password is:

oc -n ibm-common-services get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' | base64 -d

Red Hat Advanced Cluster Management

The route to Red Hat Advanced Cluster Management is

oc -n open-cluster-management get route multicloud-console --template '{{.spec.host}}'

Managing OpenShift Clusters via OpenShift GitOps

Within this asset we treat Managed Clusters as OpenShift GitOps Applications. This allows us to Create, Destroy, Hibernate and Import Managed Clusters into Red Hat Advanced Cluster Management via OpenShift GitOps.

Creating and Destroying Managed OpenShift Clusters

We've now simplified the life-cycling of OpenShift Clusters on AWS, Google Cloud and Azure via the use of Cluster Pools and ClusterClaims.

Cluster Pools allow you to pre-set a common cluster configuration, and RHACM will take that configuration and apply it to each Cluster it deploys from that Cluster Pool. For example, a Production Cluster may need to consume specific Compute resources, exist in a multi-zone configuration and run a particular version of OpenShift; RHACM will deploy a cluster to meet those requirements.

Once a Cluster Pool has been created, you can submit ClusterClaims to deploy a cluster from that pool.
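
As an illustrative sketch only (field values are placeholders; the real manifests live in the otp-gitops-clusters repository), a Hive ClusterPool and a ClusterClaim submitted against it look roughly like this:

apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: aws-cicd-pool
  namespace: aws-cicd-pool
spec:
  size: 2
  baseDomain: example.com                  # placeholder base domain
  imageSetRef:
    name: openshift-v4.10-imageset         # references a ClusterImageSet
  pullSecretRef:
    name: ocp-pull-secret
  installConfigSecretTemplateRef:
    name: aws-cicd-install-config
  platform:
    aws:
      credentialsSecretRef:
        name: aws-creds
      region: eu-west-1
---
apiVersion: hive.openshift.io/v1
kind: ClusterClaim
metadata:
  name: project-simple
  namespace: aws-cicd-pool                 # ClusterClaims must live in the ClusterPool's namespace
spec:
  clusterPoolName: aws-cicd-pool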

We have retained the ability to deploy clusters outside of Cluster Pools and updated these methods to support External Secrets Operator.

Review the Clusters layer kustomization.yaml to enable/disable the Clusters that will be deployed via OpenShift GitOps.

resources:
## ClusterPools - Separated by Env and Cloud
- argocd/clusterpools/cicd/aws/aws-cicd-pool/aws-cicd-pool.yaml
- argocd/clusterpools/cicd/azure/azure-cicd-pool/azure-cicd-pool.yaml

#- argocd/clusterpools/dev/aws/aws-dev-pool/aws-dev-pool.yaml

#- argocd/clusterpools/test/aws/aws-test-pool/aws-test-pool.yaml

#- argocd/clusterpools/prod/aws/aws-prod-pool/aws-prod-pool.yaml 

## Deploy Clusters

## ClusterClaims - Separated by Env and Cloud
- argocd/clusterclaims/dev/aws/project-simple.yaml
#- argocd/clusterclaims/prod/aws/project-simple.yaml
#- argocd/clusterclaims/cicd/aws/project-cicd.yaml
#- argocd/clusterclaims/test/aws/project-easy.yaml

## ClusterDeployments - Separated by Env and Cloud

### AWS
#- argocd/clusters/cicd/aws/aws-cicd/aws-cicd.yaml
#- argocd/clusters/dev/aws/aws-dev/aws-dev.yaml
- argocd/clusters/prod/aws/aws-prod/aws-prod.yaml

### Azure
#- argocd/clusters/cicd/azure/azure-cicd/azure-cicd.yaml
- argocd/clusters/prod/azure/azure-prod/azure-prod.yaml 
  • We have provided examples for deploying new clusters into AWS, Azure, IBM Cloud and VMware. Cluster Deployments require the use of your Cloud Provider API Keys to allow RHACM to connect to your Cloud Provider and deploy an OpenShift cluster via Terraform. We originally utilised the SealedSecrets Controller to encrypt your API Keys and provided a handy script for each Cloud Provider within the Clusters repository, under clusters/deploy/sealed-secrets/<cloud provider>, for your use.

  • More recently, we have updated the pattern to allow the use of an external keystore, e.g. Vault, and leveraged the External Secrets Operator to pull in the Cloud Provider API keys automatically. This has allowed us to simplify the creation of new clusters and reduce the values needed. The new method is stored within the Clusters repository, under clusters/deploy/external-secrets/<cloud provider>; an illustrative sketch is shown below.
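
As a hedged illustration of that approach (the secret store, namespace and key paths below are placeholders), an ExternalSecret pulling AWS API keys from Vault might look like:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: aws-creds
  namespace: aws-prod
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend          # SecretStore/ClusterSecretStore configured separately
    kind: ClusterSecretStore
  target:
    name: aws-creds              # Kubernetes Secret the operator will create
  data:
    - secretKey: aws_access_key_id
      remoteRef:
        key: secret/data/cloud-providers/aws
        property: aws_access_key_id
    - secretKey: aws_secret_access_key
      remoteRef:
        key: secret/data/cloud-providers/aws
        property: aws_secret_access_key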

Auto-discovery and import of existing OpenShift Clusters

  • Red Hat Advanced Cluster Management 2.5 makes use of the Discovery Service, which will auto-discover and import OpenShift Clusters configured within your RHOCM account. You can still perform this action outside of the Discovery Service, but this does mean that manual steps are required. An illustrative DiscoveryConfig sketch is shown after the snippet below.
resources:
## Discover & Import Existing Clusters
 - argocd/clusters/discover/discover-openshift.yaml
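
The DiscoveryConfig behind that Application looks something like the following minimal sketch (the credential Secret name and filter value are placeholders):

apiVersion: discovery.open-cluster-management.io/v1
kind: DiscoveryConfig
metadata:
  name: discovery                  # typically named 'discovery'
  namespace: open-cluster-management
spec:
  credential: ocm-api-token        # Secret holding your Red Hat OpenShift Cluster Manager credentials
  filters:
    lastActive: 7                  # only discover clusters active within the last 7 days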

Hibernating Managed OpenShift Clusters

  • You can Hibernate deployed Managed OpenShift Clusters running on AWS, Azure and GCP when not in use to reduce running costs. This has to be done AFTER a cluster has been deployed. It is accomplished by modifying spec.powerState from Running to Hibernating in the ClusterDeployment manifest of the Managed OpenShift Cluster (located under the otp-gitops-clusters repo at clusters/deploy/<aws|azure|gcp>/<cluster-name>/templates/clusterdeployment.yaml) and committing to Git.
spec:
  powerState: Hibernating
  • To resume a hibernating Managed OpenShift Cluster, modify the spec.powerState value from Hibernating to Running and commit to Git.

Managing IaaS Providers within IBM Infrastructure Automation

  • Details to follow.

Deployment of Cloud Paks through OpenShift GitOps

We will use IBM Cloud Pak for Integration (CP4i) as an example Application that can be deployed onto your Managed Clusters. As mentioned previously, we re-use the GitOps approach, and utilise OpenShift GitOps to configure the cluster ready for CP4i: deploying OpenShift Container Storage, creating Nodes with the correct CPU and Memory, installing the necessary services and tools, and finally deploying CP4i with MQ and ACE.

There are a few minor manual steps which need to be completed: preparing the CP4i repository with your Cloud details and adding the IBM Cloud Pak Entitlement Secret to the Managed Cluster. In future, we aim to automate this step via SealedSecrets, Vault and Ansible Tower.

We will use the Cloud Native Toolkit - GitOps Production Deployment Guide repositories, and it is assumed you have already configured these repositories by following the very comprehensive guide put together by that team. The configuration of those repositories is beyond the scope of this asset.

# Log into the Managed Cluster that CP4i will be deployed to via oc login
oc login --token=<token> --server=<server> 

# Clone the multi-tenancy-gitops repository you configured via the Cloud Native Toolkit - GitOps Production Deployment Guide
git clone git@github.com:cp4i-cloudpak/multi-tenancy-gitops.git

cd multi-tenancy-gitops
# Run the infra-mod.sh script to configure the Infrastructure details of the Managed Cluster
./scripts/infra-mod.sh

# Create an IBM Entitlement Secret within the tools namespace
   
## To get an entitlement key:
## 1. Log in to https://myibm.ibm.com/products-services/containerlibrary with an IBMid and password associated with the entitled software.  
## 2. Select the **View library** option to verify your entitlement(s). 
## 3. Select the **Get entitlement key** to retrieve the key.

oc new-project tools || true
oc create secret docker-registry ibm-entitlement-key -n tools \
--docker-username=cp \
--docker-password="<entitlement_key>" \
--docker-server=cp.icr.io

Note: Our aim is to reduce these steps in future releases of the asset.

You will need to update the tenents/cloudpaks/cp4i/cp4i-placement-rule.yaml file within the otp-gitops-apps repository to match the cluster you wish to deploy the Cloud Pak to, and commit to Git.

apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: ibm-cp4i-argocd
  namespace: openshift-gitops
  labels:
    app: ibm-cp4i-argocd
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions: []
    matchLabels:
      # Replace value with the Cluster you wish to provision to.
      name: aws-ireland

Uncomment the CP4i Application within Application kustomization.yaml file and commit to Git.

resources:
# Deploy Applications to Managed Clusters
## Include the Applications you wish to deploy below
## An example has been provided
 - argocd/cloudpaks/cp4i/cp4i.yaml

OpenShift GitOps will create a RHACM Application that subscribes to the multi-tenancy-gitops repository you configured and applies the manifests to the Managed Cluster's OpenShift GitOps controller.
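
As a rough sketch (resource names and the Channel namespace are illustrative, and the actual manifests are generated by the cp4i Application in otp-gitops-apps), that RHACM wiring amounts to a Channel pointing at the multi-tenancy-gitops repository and a Subscription bound to the PlacementRule above:

apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: cp4i-channel
  namespace: cp4i-channel-ns
spec:
  type: Git
  pathname: https://github.com/cp4i-cloudpak/multi-tenancy-gitops.git
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: ibm-cp4i-argocd
  namespace: openshift-gitops
  labels:
    app: ibm-cp4i-argocd
spec:
  channel: cp4i-channel-ns/cp4i-channel
  placement:
    placementRef:
      name: ibm-cp4i-argocd
      kind: PlacementRule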
