kas-installer

KAS Installer allows the deployment and configuration of Managed Kafka Service in a single K8s cluster.

Prerequisites

  • jq
  • curl
  • OpenShift. There are plans to make it compatible with native K8s in the future. Currently an OpenShift Dedicated based environment is needed (it must be a multi-zone cluster if you want to create a Kafka instance through the fleet manager using managed_kafka.sh).
  • git
  • oc
  • kubectl
  • openssl CLI tool
  • rhoas CLI (https://github.com/redhat-developer/app-services-cli)
  • A user with administrative privileges in the OpenShift cluster
  • brew coreutils (Mac only)
  • OSD Cluster with the following specs:
    • 3 compute nodes
    • Size: m5.4xlarge
    • MultiAz: True
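
A quick way to sanity-check the CLI prerequisites from a shell (a minimal sketch; brew coreutils applies to Mac only and is not covered here):

    for tool in jq curl git oc kubectl openssl rhoas; do
      command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
    done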

Description

KAS Installer deploys and configures the following components that are part of Managed Kafka Service:

  • MAS SSO
  • Observability Operator
  • sharded-nlb IngressController
  • KAS Fleet Manager
  • KAS Fleet Shard and Strimzi Operators

It deploys and configures the components to the cluster set in the user's kubeconfig file.

Additionally, a single Data Plane cluster is configured, ready to be used, in that same cluster.

Usage

Deploy Managed Kafka Service

  1. Create and fill in the KAS installer configuration file kas-installer.env. An example of the needed values can be found in the kas-installer.env.example file.
  2. Run the KAS installer kas-installer.sh to deploy and configure Managed Kafka Service.
  3. Run uninstall.sh to remove KAS from the cluster. You should remove any deployed Kafkas before running this script.

NOTE: The installer uses a predefined bundle to install the Strimzi Operator. To use a different bundle, build a dev bundle and update the STRIMZI_OPERATOR_BUNDLE_IMAGE environment variable.
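
A typical install/uninstall flow then looks like this:

    cp kas-installer.env.example kas-installer.env
    # edit kas-installer.env to match your environment, then install
    ./kas-installer.sh
    # when finished, delete any deployed Kafkas, then remove KAS
    ./uninstall.sh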


Using rhoas CLI

Use ./rhoas_login.sh as a shortcut to log in to the CLI. Log in using the username you specified as RH_USERNAME in the env file. The password is the same as the RH_USERNAME value.
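
For example:

    # log in; RH_USERNAME from the env file doubles as the password
    ./rhoas_login.sh
    # confirm the session works
    rhoas kafka list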

There are a couple of things that are expected not to work when using the RHOAS CLI with a kas-installer installed instance. These are noted below.

Service Account Maintenance

  1. To create an account, run rhoas service-account create --short-description foo --file-format properties.
  2. To list existing service accounts, run rhoas service-account list.
  3. To remove an existing service account, run rhoas service-account delete --id=<ID of service account>.
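
The clientID and clientSecret produced by the create command are the same credentials the legacy scripts below expect. A minimal sketch (the credentials.properties file name is an assumption about the CLI's default output location):

    # create an account and inspect the generated credentials
    rhoas service-account create --short-description foo --file-format properties
    cat credentials.properties   # contains the clientID and clientSecret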

Kafka Instance Maintenance

  1. To create a cluster, run rhoas kafka create --bypass-terms-check --provider aws --region us-east-1 --name <clustername>. Note that --bypass-terms-check is required as the T&Cs endpoint will not exist in your environment. The provider and region must be passed on the command line.
  2. To list existing clusters, run rhoas kafka list.
  3. To remove an existing cluster, run rhoas kafka delete --name <clustername>.
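
For example, to create an instance and look up its bootstrap host once it is ready (the -o json flag is assumed to be supported by your rhoas version):

    rhoas kafka create --bypass-terms-check --provider aws --region us-east-1 --name my-kafka
    # list as JSON and extract the bootstrap host with jq
    rhoas kafka list -o json | jq -r '.items[0].bootstrap_server_host'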

Note: managing ACLs via the rhoas CLI does not work yet (in kas-installer, the admin-server currently runs over plain HTTP).

Legacy scripts

Please favour using the rhoas command line. These scripts will be removed at some point soon.

Service Account Maintenance

The service_account.sh script supports creating, listing, and deleting service accounts.

  1. To create an account, run service_account.sh --create. The new service account information will be printed to the console. Be sure to retain the clientID and clientSecret values to use when generating an access token or for connecting to Kafka directly.
  2. To list existing service accounts, run service_account.sh --list.
  3. To remove an existing service account, run service_account.sh --delete <ID of service account>.
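
A sketch of capturing the credentials at creation time, assuming the script prints JSON containing clientID and clientSecret fields (adjust the jq paths to the actual output):

    ./service_account.sh --create | tee service-account.json
    CLIENT_ID=$(jq -r .clientID service-account.json)
    CLIENT_SECRET=$(jq -r .clientSecret service-account.json)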

Generate an Access Token

  1. Run get_access_token.sh using the clientID and clientSecret as the first and second arguments. The generated access token and its expiration date and time will be printed to the console.
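
For example, continuing with the variables captured above:

    # token for a service account
    ./get_access_token.sh "$CLIENT_ID" "$CLIENT_SECRET"
    # token for the cluster owner (used for the admin REST calls below)
    ./get_access_token.sh --owner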

Kafka Instance Maintenance

The managed_kafka.sh script supports creating, listing, and deleting Kafka clusters.

  1. To create a cluster, run managed_kafka.sh --create <cluster name>. Progress will be printed as the cluster is prepared and provisioned.
  2. To list existing clusters, run managed_kafka.sh --list.
  3. To remove an existing cluster, run managed_kafka.sh --delete <cluster ID>.
  4. To patch an existing cluster (for instance changing a strimzi version), run managed_kafka.sh --admin --patch <cluster ID> '{ "strimzi_version": "strimzi-cluster-operator.v0.23.0-3" }'
  5. To use the Kafka bin scripts against a pre-existing Kafka cluster, run managed_kafka.sh --certgen <kafka id> <Service_Account_ID> <Service_Account_Secret>. If you do not pass the <Service_Account_ID> <Service_Account_Secret> arguments, the script will attempt to create a service account for you. Cert generation is already performed at the end of --create. Point the --command-config flag to the generated app-services.properties in the working directory.
  • If two service accounts already exist, you must delete one of them for this script to work.
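
A sketch of the full lifecycle, capturing the instance ID with jq (the .items[0].id path is an assumption about the --list JSON output):

    ./managed_kafka.sh --create my-kafka
    KAFKA_ID=$(./managed_kafka.sh --list | jq -r '.items[0].id')
    ./managed_kafka.sh --admin --patch "$KAFKA_ID" '{ "strimzi_version": "strimzi-cluster-operator.v0.23.0-3" }'
    ./managed_kafka.sh --delete "$KAFKA_ID"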

Access the Kafka Cluster using command line tools

To use the Kafka cluster created with the managed_kafka.sh script with command line tools like kafka-topics.sh or kafka-console-consumer.sh, do the following.

  1. Generate the certificate and app-services.properties file by running managed_kafka.sh --certgen <instance-id>. The instance ID can be found by running managed_kafka.sh --list; the same response also contains the bootstrap host for the cluster.

  2. Run the following to grant the service account the permissions to create a topic and group. For <service-acct> in the commands below, use the service account client ID from the generated app-services.properties file.

    curl -vs \
      -H "Authorization: Bearer $(./get_access_token.sh --owner)" \
      "http://admin-server-$(./managed_kafka.sh --list | jq -r .items[0].bootstrap_server_host | awk -F: '{print $1}')/rest/acls" \
      -XPOST \
      -H 'Content-type: application/json' \
      --data '{"resourceType":"GROUP", "resourceName":"*", "patternType":"LITERAL", "principal":"User:<service-acct>", "operation":"ALL", "permission":"ALLOW"}'

    Then, for the topic:

    curl -vs \
      -H "Authorization: Bearer $(./get_access_token.sh --owner)" \
      "http://admin-server-$(./managed_kafka.sh --list | jq -r .items[0].bootstrap_server_host | awk -F: '{print $1}')/rest/acls" \
      -XPOST \
      -H 'Content-type: application/json' \
      --data '{"resourceType":"TOPIC", "resourceName":"*", "patternType":"LITERAL", "principal":"User:<service-acct>", "operation":"ALL", "permission":"ALLOW"}'
  3. Then execute your tool, e.g. kafka-topics.sh --bootstrap-server <bootstrap-host>:443 --command-config app-services.properties --topic foo --create --partitions 9.

  4. If you created a separate service account using the above instructions, edit the app-services.properties file and update the username and password with the clientID and clientSecret. A combined sketch follows this list.
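
Putting the steps together (a sketch; assumes the first listed instance is the target and app-services.properties is in the working directory):

    # resolve the bootstrap host, stripping the port as in the ACL calls above
    BOOTSTRAP_HOST=$(./managed_kafka.sh --list | jq -r .items[0].bootstrap_server_host | awk -F: '{print $1}')
    # create a topic, then consume from it
    kafka-topics.sh --bootstrap-server "$BOOTSTRAP_HOST:443" --command-config app-services.properties --topic foo --create --partitions 9
    kafka-console-consumer.sh --bootstrap-server "$BOOTSTRAP_HOST:443" --consumer.config app-services.properties --topic foo --from-beginning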

Running E2E Test Suite (experimental)

  1. Install all cluster components using kas-installer.sh
  2. Clone the e2e-test-suite repository locally and change directory to the test suite project root
  3. Generate the test suite configuration with ${KAS_INSTALLER_DIR}/e2e-test-config.sh > config.json
  4. Execute individual test classes:
    • ./hack/testrunner.sh test KafkaAdminPermissionTest
    • ./hack/testrunner.sh test KafkaInstanceAPITest
    • ./hack/testrunner.sh test KafkaCLITest
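
For example (the clone URL is a placeholder; KAS_INSTALLER_DIR points at your kas-installer checkout):

    git clone <e2e-test-suite-repo-url> && cd e2e-test-suite
    "${KAS_INSTALLER_DIR}"/e2e-test-config.sh > config.json
    ./hack/testrunner.sh test KafkaAdminPermissionTest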
