strimzi / strimzi-kafka-oauth

OAuth2 support for Apache Kafka® to work with many OAuth2 authorization servers


OAuth keycloak authentication setup failing

bharat15-ml opened this issue · comments

Hi,
I have followed this documentation to set up Strimzi Kafka 2.7.0 with Keycloak authentication: https://strimzi.io/blog/2019/10/25/kafka-authentication-using-oauth-2.0/

The blog is divided into three sections:

  1. Setting up the Keycloak service.
  2. Starting the Kafka cluster.
  3. Testing authentication through Kafka producer and consumer clients.

I was able to complete the first two sections: Keycloak and the Kafka cluster are properly configured, and the cluster is up and running.
Keycloak is set up with an enterprise certificate provider instead of a self-signed one; its root.crt was used to create the truststore "ca-truststore", which is used in the Kafka deployment as described in the document. Up to this point everything works fine.
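
(For reference, a truststore secret like that can be created along these lines; this is just a sketch assuming the Keycloak root certificate is in a local file called rootCA.pem and the cluster runs in the kafka namespace:)

# create a secret holding the Keycloak root certificate, referenced later by the listener's tlsTrustedCertificates
kubectl create secret generic ca-truststore -n kafka --from-file=rootCA.pem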

I created three listeners during cluster creation (the Kafka resource uses apiVersion: kafka.strimzi.io/v1beta2). The relevant part of my kafka-deployment.yaml is below:

listeners:
  - name: plain
    port: 9092
    type: internal
    tls: false
  - name: tls
    port: 9093
    type: internal
    tls: true
  - name: external
    port: 30547
    type: loadbalancer
    tls: true
    authentication:
      type: oauth
      validIssuerUri: https://153.88.30.40:30456/auth/realms/test
      jwksEndpointUri: https://153.88.30.40:30456/auth/realms/test/protocol/openid-connect/certs
      clientId: kafka-broker
      clientSecret:
        secretName: broker-oauth-secret
        key: secret
      userNameClaim: preferred_username
      maxSecondsWithoutReauthentication: 3600
      tlsTrustedCertificates:
        - secretName: ca-truststore
          certificate: rootCA.pem
      disableTlsHostnameVerification: true
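
(The broker-oauth-secret referenced under clientSecret holds the kafka-broker client secret from Keycloak. A minimal sketch of how it might be created; the literal value is a placeholder:)

kubectl create secret generic broker-oauth-secret -n kafka \
  --from-literal=secret=<kafka-broker-client-secret-from-keycloak>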

Now, in the third section (configuring application client pods), I need some clarification on exporting the certificate and packaging it into the client truststore.

kubectl get secret my-cluster-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 --decode > kafka.crt
# inspect the certificate
#openssl x509 -text -noout -in kafka.crt

I extracted the password used in the Strimzi Kafka cluster deployment:

kubectl get secret -n kafka my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password

and then used the password value from the ca.password file.

Now I have kafka.crt and also the password. As per the documentation, the next step is to create kafka-client-truststore.p12 using kafka.crt and ca.crt.

But one question here: what is the ca.crt file mentioned in the documentation, and how do I get it?

I thought ca.crt is the Keycloak root certificate, so I used rootCA.pem in my case. Please let me know whether that is correct or wrong.

While creating the .p12 file here, is ca.crt the Keycloak root certificate which we created in step 1, or is it something else? I replaced ca.crt with the rootCA.pem file in my case.

export PASSWORD=********

keytool -keystore kafka-client-truststore.p12 -storetype PKCS12 -alias ca -storepass $PASSWORD -keypass $PASSWORD -import -file ca.crt <rootCA.pem> -noprompt

keytool -keystore kafka-client-truststore.p12 -storetype PKCS12 -alias kafka -storepass $PASSWORD -keypass $PASSWORD -import -file kafka.crt -noprompt
kubectl create secret generic kafka-client-truststore -n clients --from-file=kafka-client-truststore.p12
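
(As a sanity check, both certificates should show up as trustedCertEntry when listing the truststore; a sketch using the same $PASSWORD as above:)

keytool -list -keystore kafka-client-truststore.p12 -storetype PKCS12 -storepass $PASSWORD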

I then created one more client, kafka-producer, in Keycloak.

export OAUTH_CLIENT_ID=kafka-producer
export OAUTH_CLIENT_SECRET=*****************

Then I executed the commands below, as mentioned in the documentation:

## here I used the same exported password of the kafka cluster
export PASSWORD=********

Then I used the deployment file below, and the client pod (kafka-producer) came up and is running.

apiVersion: v1
kind: Pod
metadata:
  name: kafka-client-shell
spec:
  containers:
  - name: kafka-client-shell
    image: strimzi/kafka:0.23.0-kafka-2.7.0
    command: ["/bin/bash"]
    args: [ "-c", 'for((i=0;;i+=1)); do echo "Up time: \$i min" && sleep 60; done' ]
    env:
    - name: CLASSPATH
      value: /opt/kafka/libs/kafka-oauth-client-*:/opt/kafka/libs/kafka-oauth-common-*
    - name: OAUTH_TOKEN_ENDPOINT_URI
      value: https://153.88.30.40:30456/auth/realms/test/protocol/openid-connect/token
    volumeMounts:
    - name: truststore
      mountPath: "/opt/kafka/certificates"
      readOnly: true
  volumes:
  - name: truststore
    secret:
      secretName: kafka-client-truststore
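
(The pod was then deployed into the clients namespace with a plain kubectl apply; a sketch, assuming the manifest was saved as kafka-client-shell.yaml:)

kubectl apply -n clients -f kafka-client-shell.yaml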

Then, after logging in to the pod, I executed the commands below:

export KAFKA_OPTS=" \
-Djavax.net.ssl.trustStore=/opt/kafka/certificates/kafka-client-truststore.p12 \
-Djavax.net.ssl.trustStorePassword=$PASSWORD \
-Djavax.net.ssl.trustStoreType=PKCS12"

 ## here I passed the LoadBalancer IP as the bootstrap server of the kafka cluster

 bin/kafka-console-producer.sh --broker-list \
   153.88.17.164:30547 --topic my-topic \
   --producer-property 'security.protocol=SASL_SSL' \
   --producer-property 'sasl.mechanism=OAUTHBEARER' \
   --producer-property 'sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;' \
   --producer-property 'sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler'

I am getting the following errors:

  1. ERROR java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)
  2. Caused by: java.security.KeyStoreException: Problem accessing trust store
  3. Caused by: java.io.IOException: keystore password was incorrect
  4. failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded
  5. javax.security.auth.login.LoginException: An internal error occurred while retrieving from callback handler.

Please help me with resolving this issue. Thanks!!

While creating the .p12 file here, is ca.crt the Keycloak root certificate which we created in step 1, or is it something else? I replaced ca.crt with the rootCA.pem file in my case.

It is instructing you to use the Cluster CA of the Kafka cluster. So I guess this is not your Keycloak certificate but the Kafka cluster certificate.

I am getting the following errors:

Sharing the full log is always better than sharing an extract. But it suggests that your keystore / truststore is not in the right format or the password for it is wrong.

Thanks @scholzj, I thought the command below is used for the Kafka cluster CA:
kubectl get secret my-cluster-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 --decode > kafka.crt

Please let me know what kafka.crt and ca.crt are, as per the documentation.

It is the Kafka Cluster CA certificate. It is in a secret created by Strimzi; you just extract the certificate from it.
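
(One quick way to confirm what you extracted; a sketch using standard openssl, where the subject/issuer should point at the Strimzi cluster CA rather than Keycloak:)

openssl x509 -noout -subject -issuer -in kafka.crt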

Thanks!! So to create kafka-client-truststore.p12, two files are required: one is kafka.crt and the other is ca.crt.

I have extracted the kafka.crt file with the command: kubectl get secret my-cluster-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 --decode > kafka.crt

Please let me know what command to use to extract the ca.crt file. I also used the same extracted password while creating the .p12 file; I guess this is also wrong.

Please provide the correct commands to use for the following:

export PASSWORD=********

keytool -keystore kafka-client-truststore.p12 -storetype PKCS12 -alias ca -storepass $PASSWORD -keypass $PASSWORD -import -file ca.crt <rootCA.pem> -noprompt

keytool -keystore kafka-client-truststore.p12 -storetype PKCS12 -alias kafka -storepass $PASSWORD -keypass $PASSWORD -import -file kafka.crt -noprompt
kubectl create secret generic kafka-client-truststore -n clients --from-file=kafka-client-truststore.p12

I extracted the password used in the Strimzi Kafka cluster deployment:

kubectl get secret -n kafka my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password

You don't need this password. It is generated by the Strimzi operator to protect the content of the keystores and truststores used by the brokers for the server side of TLS connectivity.

The kafka.crt file that you exported contains the public certificate only and is not protected by any password.

Now I have kafka.crt and also the password. As per the documentation, the next step is to create kafka-client-truststore.p12 using kafka.crt and ca.crt.

You need kafka.crt and the Keycloak certificate (which looks like rootCA.pem in your case) to create your kafka-client-truststore.

I thought ca.crt is the Keycloak root certificate, so I used rootCA.pem in my case. Please let me know whether that is correct or wrong.

Assuming that your Keycloak certificate is signed by rootCA, then that's the correct certificate.

While creating the .p12 file here, is ca.crt the Keycloak root certificate which we created in step 1, or is it something else?

Yes, in the blog post we create a ca.crt for Keycloak.

export PASSWORD=********

This password is something you set yourself.

keytool -keystore kafka-client-truststore.p12 -storetype PKCS12 -alias ca -storepass $PASSWORD -keypass $PASSWORD -import -file ca.crt <rootCA.pem> -noprompt

If you say -file ca.crt it means that the file called ca.crt must exist. In your case it should simply be -file rootCA.pem.

## here I used the same exported password of the kafka cluster
export PASSWORD=********

The exported password is irrelevant. You need a password for your kafka-client-truststore.p12, which you set yourself when creating the .p12 file.
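
Putting it together, a minimal corrected sequence might look like this (a sketch: the password value is a placeholder you choose, and rootCA.pem is assumed to be your Keycloak root certificate):

# a password of your own choosing, unrelated to the Strimzi-generated CA password
export PASSWORD=changeit

# import the Keycloak root certificate and the Kafka cluster CA certificate
keytool -keystore kafka-client-truststore.p12 -storetype PKCS12 -alias ca \
  -storepass $PASSWORD -keypass $PASSWORD -import -file rootCA.pem -noprompt
keytool -keystore kafka-client-truststore.p12 -storetype PKCS12 -alias kafka \
  -storepass $PASSWORD -keypass $PASSWORD -import -file kafka.crt -noprompt

kubectl create secret generic kafka-client-truststore -n clients \
  --from-file=kafka-client-truststore.p12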

then used below deployment file and client pod (kafka-producer)came up and running.

apiVersion: v1
kind: Pod
metadata:
  name: kafka-client-shell
spec:
  containers:
  - name: kafka-client-shell
    image: strimzi/kafka:0.23.0-kafka-2.7.0
    command: ["/bin/bash"]
    args: [ "-c", 'for((i=0;;i+=1)); do echo "Up time: $i min" && sleep 60; done' ]
    env:
    - name: CLASSPATH
      value: /opt/kafka/libs/kafka-oauth-client-*:/opt/kafka/libs/kafka-oauth-common-*

This is not enough jars. You also need dependencies / helper libraries. It's easiest to just include /opt/kafka/libs/*.
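
For example, inside the pod shell a broader classpath could be set like this (a sketch; the quotes keep the shell from expanding the wildcard, and Java resolves the * entry to all jars in the directory):

export CLASSPATH="/opt/kafka/libs/*"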

export KAFKA_OPTS=" \
-Djavax.net.ssl.trustStore=/opt/kafka/certificates/kafka-client-truststore.p12 \
-Djavax.net.ssl.trustStorePassword=$PASSWORD \
-Djavax.net.ssl.trustStoreType=PKCS12"

You refer to $PASSWORD in a new shell. Have you set it to some value first?

I am getting the following errors:

  1. ERROR java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)
  2. Caused by: java.security.KeyStoreException: Problem accessing trust store
  3. Caused by: java.io.IOException: keystore password was incorrect
  4. failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded
  5. javax.security.auth.login.LoginException: An internal error occurred while retrieving from callback handler.

It sounds like the password you used to create the kafka-client-truststore is not the same as the $PASSWORD you provided. Maybe it's just an issue of the shell env: you exec into your pod and refer to the $PASSWORD env variable, but did you define the PASSWORD env var in your exec shell?
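
For example (a sketch, assuming the pod and namespace from above; the password value is whatever you chose when creating the .p12):

kubectl exec -n clients -it kafka-client-shell -- /bin/bash
# the exec shell starts fresh, so PASSWORD must be set again before it is referenced
export PASSWORD=<your-truststore-password>
export KAFKA_OPTS=" \
-Djavax.net.ssl.trustStore=/opt/kafka/certificates/kafka-client-truststore.p12 \
-Djavax.net.ssl.trustStorePassword=$PASSWORD \
-Djavax.net.ssl.trustStoreType=PKCS12"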

Thanks @mstruk!! I am able to deploy everything fine now.
Now I can connect to the Kafka cluster and the request is forwarded to Keycloak,
but I am getting an error for the Keycloak LoadBalancer IP, i.e. 153.88.30.40:

**"ERROR No subject alternative names matching IP address 153.88.30.40 found (org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule)"**

I assume it is related to the SAN or an /etc/hosts DNS entry? Any input on this?

Keycloak / RH-SSO is typically set up to be widely accessible across networks, so it's set up with a fully qualified domain name in the certificate.

The usual approach is to have the certificate issued for something like sso.example.org and then use https://sso.example.org/auth/realms... in all URLs to Keycloak.

For your setup you can use the IP, but then the IP address needs to be contained in the certificate that Keycloak presents, which means you need to control how the certificate is created when it is issued.

So, either make sure the certificate contains the Keycloak IP, or find out which hostname / FQDN is declared in the certificate and use that. But you have to access your Keycloak using the same hostname / IP and port from all the clients (Kafka brokers and Kafka clients).
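
One way to see which names the certificate actually contains (a sketch using standard openssl against your Keycloak endpoint):

# print the Subject Alternative Name entries of the certificate Keycloak serves
openssl s_client -connect 153.88.30.40:30456 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'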

Thanks @mstruk. After making all the corrections, I tested once and was able to produce and consume data from the Kafka topic. Now, when retesting, I am getting the errors below:

I executed the commands below on the Kafka producer client:

bin/kafka-console-producer.sh --broker-list 153.88.17.164:30547 --topic my-topic \
--producer-property 'security.protocol=SASL_SSL' \
--producer-property 'sasl.mechanism=OAUTHBEARER' \
--producer-property 'sasl.jaas.config=org.aache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;' \
--producer-property 'sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler'

I am getting the errors below:

org.apache.kafka.common.KafkaException: Failed to construct kafka producer
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:441)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:302)
    at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:45)
    at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: No LoginModule found for org.aache.kafka.common.security.oauthbearer.OAuthBearerLoginModule
    at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:172)
    at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:157)
    at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:73)
    at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105)
    at org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:449)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:430)
    ... 3 more
Caused by: javax.security.auth.login.LoginException: No LoginModule found for org.aache.kafka.common.security.oauthbearer.OAuthBearerLoginModule
    at java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:710)
    at java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:665)
    at java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:663)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:663)
    at java.base/javax.security.auth.login.LoginContext.login(LoginContext.java:574)
    at org.apache.kafka.common.security.oauthbearer.internals.expiring.ExpiringCredentialRefreshingLogin.login(ExpiringCredentialRefreshingLogin.java:204)
    at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerRefreshingLogin.login(OAuthBearerRefreshingLogin.java:150)
    at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:62)
    at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:105)
    at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:158)
    ... 8 more

Please suggest a probable cause of this error and how to resolve it.
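
(Note: the stack trace itself points at the likely cause: sasl.jaas.config spells the login module class as org.aache.kafka... instead of org.apache.kafka.... The corrected property would be:)

--producer-property 'sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;'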

Thanks!! I was able to resolve the issues and it's working fine now, so I am closing this thread.