KAS Installer allows the deployment and configuration of Managed Kafka Service in a single K8s cluster.
The following prerequisites are required:

- jq
- curl
- OpenShift. Currently an OpenShift Dedicated based environment is needed; there are plans to make the installer compatible with native K8s in the future. (Currently the cluster needs to be multi-zone if you want to create a Kafka instance through the fleet manager by using `managed_kafka.sh`.)
- git
- oc
- kubectl
- openssl CLI tool
- rhoas CLI (https://github.com/redhat-developer/app-services-cli)
- A user with administrative privileges in the OpenShift cluster
- brew coreutils (Mac only)
- OSD cluster with the following specs:
  - 3 compute nodes
  - Size: m5.4xlarge
  - MultiAz: True
KAS Installer deploys and configures the following components that are part of Managed Kafka Service:
- MAS SSO
- Observability Operator
- sharded-nlb IngressController
- KAS Fleet Manager
- KAS Fleet Shard and Strimzi Operators
It deploys and configures the components to the cluster set in the user's kubeconfig file. Additionally, a single Data Plane cluster is configured, ready to be used, in that same cluster.
- Create and fill the KAS installer configuration file `kas-installer.env`. An example of the needed values can be found in the `kas-installer.env.example` file.
- Run the KAS installer `kas-installer.sh` to deploy and configure Managed Kafka Service.
- Run `uninstall.sh` to remove KAS from the cluster. You should remove any deployed Kafkas before running this script.

NOTE: The installer uses a predefined bundle for installing the Strimzi Operator. To use a different bundle, you'll need to build a dev bundle and update the `STRIMZI_OPERATOR_BUNDLE_IMAGE` environment variable.
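For illustration, a minimal `kas-installer.env` might look like the sketch below. Only `RH_USERNAME` and `STRIMZI_OPERATOR_BUNDLE_IMAGE` are named elsewhere in this document; the values shown are placeholders, and the authoritative list of variables is in `kas-installer.env.example`.

```shell
# Hypothetical kas-installer.env sketch -- values are placeholders.
# See kas-installer.env.example for the full, authoritative variable list.
RH_USERNAME=my-user   # used by rhoas_login.sh; the password defaults to this value too
# Optional: override the predefined Strimzi Operator bundle with a dev bundle
STRIMZI_OPERATOR_BUNDLE_IMAGE=quay.io/example/strimzi-operator-bundle:dev
```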
Use `./rhoas_login.sh` as a shortcut to log in to the CLI. Log in using the username you specified as `RH_USERNAME` in the env file. The password is the same as the `RH_USERNAME` value.
There are a couple of things that are expected not to work when using the RHOAS CLI with a kas-installer installed instance. These are noted below.
- To create a service account, run `rhoas service-account create --short-description foo --file-format properties`.
- To list existing service accounts, run `rhoas service-account list`.
- To remove an existing service account, run `rhoas service-account delete --id=<ID of service account>`.
- To create a cluster, run `rhoas kafka create --bypass-terms-check --provider aws --region us-east-1 --name <clustername>`. Note that `--bypass-terms-check` is required as the T&Cs endpoint will not exist in your environment. The provider and region must be passed on the command line.
- To list existing clusters, run `rhoas kafka list`.
- To remove an existing cluster, run `rhoas kafka delete --name <clustername>`.
Note: managing ACLs via the rhoas CLI does not work yet (in kas-installer, the admin server currently runs over plain HTTP).

Please favour using the rhoas command line. These scripts will be removed at some point soon.
The `service_account.sh` script supports creating, listing, and deleting service accounts.

- To create an account, run `service_account.sh --create`. The new service account information will be printed to the console. Be sure to retain the `clientID` and `clientSecret` values to use when generating an access token or for connecting to Kafka directly.
- To list existing service accounts, run `service_account.sh --list`.
- To remove an existing service account, run `service_account.sh --delete <ID of service account>`.
- Run `get_access_token.sh` using the `clientID` and `clientSecret` as the first and second arguments. The generated access token and its expiration date and time will be printed to the console.
The `managed_kafka.sh` script supports creating, listing, and deleting Kafka clusters.

- To create a cluster, run `managed_kafka.sh --create <cluster name>`. Progress will be printed as the cluster is prepared and provisioned.
- To list existing clusters, run `managed_kafka.sh --list`.
- To remove an existing cluster, run `managed_kafka.sh --delete <cluster ID>`.
- To patch an existing cluster (for instance, changing a Strimzi version), run `managed_kafka.sh --admin --patch <cluster ID> '{ "strimzi_version": "strimzi-cluster-operator.v0.23.0-3" }'`.
- To use the Kafka bin scripts against a pre-existing Kafka cluster, run `managed_kafka.sh --certgen <kafka id> <Service_Account_ID> <Service_Account_Secret>`. If you do not pass the `<Service_Account_ID>` and `<Service_Account_Secret>` arguments, the script will attempt to create a service account for you. Cert generation is already performed at the end of `--create`. Point the `--command-config` flag to the generated `app-services.properties` in the working directory.
  - If there are already 2 pre-existing service accounts, you must delete one of them for this script to work.
To use the Kafka cluster created with the `managed_kafka.sh` script with command line tools like `kafka-topics.sh` or `kafka-console-consumer.sh`, do the following.

- Generate the certificate and `app-services.properties` file by running `managed_kafka.sh --certgen <instance-id>`, where `instance-id` can be found by running `managed_kafka.sh --list` (the same response also contains the bootstrap host for the cluster).

- Run the following to give the current user the permissions to create a topic and group. For `<service-acct>` in the commands below, take the service account from the generated `app-services.properties` file.

  ```
  curl -vs -H"Authorization: Bearer $(./get_access_token.sh --owner)" http://admin-server-$(./managed_kafka.sh --list | jq -r .items[0].bootstrap_server_host | awk -F: '{print $1}')/rest/acls -XPOST -H'Content-type: application/json' --data '{"resourceType":"GROUP", "resourceName":"*", "patternType":"LITERAL", "principal":"User:<service-acct>", "operation":"ALL", "permission":"ALLOW"}'
  ```

  then for Topic:

  ```
  curl -vs -H"Authorization: Bearer $(./get_access_token.sh --owner)" http://admin-server-$(./managed_kafka.sh --list | jq -r .items[0].bootstrap_server_host | awk -F: '{print $1}')/rest/acls -XPOST -H'Content-type: application/json' --data '{"resourceType":"TOPIC", "resourceName":"*", "patternType":"LITERAL", "principal":"User:<service-acct>", "operation":"ALL", "permission":"ALLOW"}'
  ```

- Then execute your tool, for example:

  ```
  kafka-topics.sh --bootstrap-server <bootstrap-host>:443 --command-config app-services.properties --topic foo --create --partitions 9
  ```

- If you created a separate service account using the above instructions, edit the `app-services.properties` file and update the username and password with the `clientID` and `clientSecret`.
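The `admin-server-…` hostname used in the ACL curl commands above is derived by stripping the port from the instance's `bootstrap_server_host`. As a small self-contained check of that extraction step (the hostname below is a made-up placeholder):

```shell
# Strip the port from a bootstrap host, as the ACL curl commands above do
# with `awk -F: '{print $1}'`. The hostname here is a hypothetical placeholder.
bootstrap="my-kafka--abc.example.com:443"
host=$(echo "$bootstrap" | awk -F: '{print $1}')
echo "admin-server-$host"   # prints admin-server-my-kafka--abc.example.com
```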
To run the e2e test suite against the installed environment:

- Install all cluster components using `kas-installer.sh`.
- Clone the e2e-test-suite repository locally and change directory to the test suite project root.
- Generate the test suite configuration with `${KAS_INSTALLER_DIR}/e2e-test-config.sh > config.json`.
- Execute individual test classes:

  ```
  ./hack/testrunner.sh test KafkaAdminPermissionTest
  ./hack/testrunner.sh test KafkaInstanceAPITest
  ./hack/testrunner.sh test KafkaCLITest
  ```