# HDF Automation

Automation to deploy the middleware and operations tools required by HDF.
## Requirements

### Ansible Automation

You need Ansible and the Kubernetes module installed on your machine in order to run this playbook.
### OpenShift

This deployment requires an OpenShift environment. You will also need an OpenShift user with cluster-admin privileges.
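To confirm you are logged in with sufficient privileges, you can use the `oc` CLI (a sketch; it assumes the OpenShift CLI is installed and you are logged in to the target cluster):

```shell
# Print the token of the currently logged-in user
oc whoami -t

# Check whether the current user effectively has cluster-admin rights
oc auth can-i '*' '*' --all-namespaces
```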
> **Note:** This automation was specifically tested with OCP 4.10. Older or newer versions may have API incompatibilities or a different Operator Catalog that will break the automation.
### Resource Consumption

Most of the deployed components use bare-minimum resources or do not enforce resource requirements. These can be adjusted by changing the parameters or templates used.
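For example, defaults can be overridden on the command line with Ansible extra vars (the variable name below is a placeholder; use the names actually defined in the roles and templates):

```shell
# Override a hypothetical storage-size variable when running the playbook
ansible-playbook -e kafka_volume_size=5Gi playbook.yml
```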
### Storage

Some components require RWO (ReadWriteOnce) volumes to work properly. Here are the default values.

| Component | Default Value | Comment |
|---|---|---|
| Logging | 10Gi | |
| Kafka | 9Gi | 3Gi per replica |
| Zookeeper | 9Gi | 3Gi per replica |
| Tekton Shared Storage | 4Gi | |
| **Total** | **32Gi** | |
### CPU and Memory Limits

> **Note:** TBD
## Tools Available

- OpenShift User Workload Monitoring
- AMQ Streams Operator
- Grafana Operator
- OpenShift GitOps
- OpenShift Pipelines
- OpenShift Logging
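After the playbook finishes, one way to check that the operators above were installed (assumes the `oc` CLI and a logged-in session):

```shell
# List ClusterServiceVersions across all namespaces to see installed operators
oc get csv --all-namespaces
```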
## How to Use the Automation

### Required Command-line Parameters

You can change the values of variables defined in the roles and playbook to customize your deployment, but in order to access your OpenShift cluster you must pass the following required values as command-line properties:

| Parameter | Example Value | Definition |
|---|---|---|
| `token` | `sha256~vFanQbthlPKfsaldJT3bdLXIyEkd7ypO_XPygY1DNtQ` | Access token of a user with cluster-admin privileges |
| `server` | | OpenShift cluster API URL |
| `docker_config` | `vFanQbthlPKfsaldJT3bdLXIyEkd7ypO_XPygY1DNtQ` | |
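If you are already logged in with `oc`, the token and server values can be captured like this (a sketch assuming an active `oc` session):

```shell
# Capture the access token and API URL of the current session
token=$(oc whoami -t)
server=$(oc whoami --show-server)
echo "${server}"
```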
## Deploying the Environment

Export `token`, `server`, and `docker_config` as environment variables, then run the following command from the `ansible` folder:

```shell
ansible-playbook -e token=${token} -e server=${server} -e docker_config=${docker_config} playbook.yml
```
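A quick post-deployment spot check might look like this (the namespace names are the usual defaults for these operators and may differ in your setup):

```shell
# Spot-check a few of the deployed components
oc get pods -n openshift-gitops
oc get pods -n openshift-logging
```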