Confluent hub installation is missing guava
mmajis opened this issue · comments
Hi!
Thanks for writing this config provider!
It looks like guava is a dependency for your connect-utils library but it's not included in the confluent hub installation for this config provider.
I installed the provider to cp-kafka-connect:6.2.0:
FROM confluentinc/cp-kafka-connect:6.2.0
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-s3:10.0.0 && \
confluent-hub install --no-prompt jcustenborder/kafka-config-provider-aws:0.1.1
The container wouldn't start until I added guava-30.1.1-jre.jar
in /usr/share/confluent-hub-components/jcustenborder-kafka-config-provider-aws/lib
and modified CUB_CLASSPATH
to add all the jars there:
export CUB_CLASSPATH="$CUB_CLASSPATH:\"/usr/share/confluent-hub-components/jcustenborder-kafka-config-provider-aws/lib/*\""
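Putting both pieces of the workaround together, a Dockerfile sketch might look like this (the guava version and the Maven Central URL are my assumptions, not something the provider documents; the CUB_CLASSPATH export above still needs to happen at container start):

```dockerfile
FROM confluentinc/cp-kafka-connect:6.2.0

# Install the connectors, then fetch the guava jar the provider needs but doesn't bundle
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-s3:10.0.0 && \
    confluent-hub install --no-prompt jcustenborder/kafka-config-provider-aws:0.1.1 && \
    curl -fsSLo /usr/share/confluent-hub-components/jcustenborder-kafka-config-provider-aws/lib/guava-30.1.1-jre.jar \
      https://repo1.maven.org/maven2/com/google/guava/guava/30.1.1-jre/guava-30.1.1-jre.jar
```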
I'm not sure whether this is only a cub issue, or whether it would still fail without guava after getting past the cub health check. Initially I got a NoClassDefFoundError for the config provider class itself, so cub is definitely not ready to handle config providers installed via confluent-hub without additional classpath configuration.
Guava related stack trace:
===> Check if Kafka is healthy ...
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/base/Strings
at com.github.jcustenborder.kafka.connect.utils.config.ConfigKeyBuilder.build(ConfigKeyBuilder.java:61)
at com.github.jcustenborder.kafka.config.aws.SecretsManagerConfigProviderConfig.config(SecretsManagerConfigProviderConfig.java:75)
at com.github.jcustenborder.kafka.config.aws.SecretsManagerConfigProviderConfig.<init>(SecretsManagerConfigProviderConfig.java:53)
at com.github.jcustenborder.kafka.config.aws.SecretsManagerConfigProvider.configure(SecretsManagerConfigProvider.java:136)
at org.apache.kafka.common.config.AbstractConfig.instantiateConfigProviders(AbstractConfig.java:572)
It seems the cub classpath is an issue in general for config providers, see confluentinc/cp-docker-images#828.
Can confirm that it's also an issue in MSK (which has isolated plugin classpaths).
Is there any progress?
Any news on how this can be fixed/bypassed? Same issue happening on AWS MSK.
Yeah, I'm having this same issue on MSK as well.
same issue on MSK
To resolve the MSK Connect issue I downloaded the guava jar (guava-31.1-jre.jar) directly from here
When you create the custom plugin for MSK Connect, after extracting jcustenborder-kafka-config-provider-aws,
drop the guava jar in the lib folder before creating the archive that you upload to S3.
@cgn-ca thanks for the help that fixed it for me!
I ran into the same issue using this with the Debezium Oracle CDC connector v2.x. It works fine without the guava jar with v1.9.7. Once I copied guava-31.1-jre.jar, v2.x works fine too.
To resolve the MSK Connect issue I downloaded the guava jar (guava-31.1-jre.jar) directly from here
When you create the custom plugin for MSK Connect, after extracting jcustenborder-kafka-config-provider-aws,
drop the guava jar in the lib folder before creating the archive that you upload to S3.
Maybe I'm doing something not quite right here, but I grabbed the same file, and placed it at: debezium-connector-sqlserver-2.1.2/jcustenborder-kafka-config-provider-aws-0.1.2/lib/guava-31.1-jre.jar
and then reuploaded to S3. Does this require re-creating the custom plugin?
The debezium-connector-sqlserver-2.1.2 folder ends up being what gets zipped. Is this similar to your directory structure?
To resolve the MSK Connect issue I downloaded the guava jar (guava-31.1-jre.jar) directly from here
When you create the custom plugin for MSK Connect, after extracting jcustenborder-kafka-config-provider-aws,
drop the guava jar in the lib folder before creating the archive that you upload to S3.

Maybe I'm doing something not quite right here, but I grabbed the same file, and placed it at:
debezium-connector-sqlserver-2.1.2/jcustenborder-kafka-config-provider-aws-0.1.2/lib/guava-31.1-jre.jar
and then reuploaded to S3. Does this require re-creating the custom plugin?

The debezium-connector-sqlserver-2.1.2 folder ends up being what gets zipped. Is this similar to your directory structure?
The directory structure is different for Debezium and jcustenborder-kafka-config-provider.
What I did was keep the jcustenborder-kafka-config-provider directory structure and copy
- all the jar files from Debezium into the jcustenborder-kafka-config-provider-aws-0.1.2/lib folder
- all the .md/.txt files from Debezium into the jcustenborder-kafka-config-provider-aws-0.1.2/doc folder
- the one json file from Debezium into the jcustenborder-kafka-config-provider-aws-0.1.2 folder
Then I renamed the folder, e.g. to debezium-connector-oracle-v201-SM (SM for Secrets Manager integration), zipped it to debezium-connector-oracle-v201-SM.zip, uploaded that to S3, and created the plugin. That should work fine.
I was also using GSR (Glue Schema Registry) for Avro messages, so I copied those libs into the lib folder as well.
@pc-akothapeta Thank you for the response. I found out the issue was actually that I wasn't creating a NEW custom plugin each time. Based on the AWS console, I was operating under the impression that it would read from the most recent file pointed at in S3, but it seems it does take the version into account. Creating a new plugin immediately resolved my issue, even with my old directory structure...
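For anyone landing here: after changing the zip, create a fresh custom plugin rather than expecting the existing one to re-read the S3 object. A sketch with the AWS CLI (the plugin name, bucket ARN, and file key are placeholders):

```shell
aws kafkaconnect create-custom-plugin \
  --name debezium-sqlserver-with-config-provider-v2 \
  --content-type ZIP \
  --location '{"s3Location":{"bucketArn":"arn:aws:s3:::my-plugins-bucket","fileKey":"debezium-connector-sqlserver-2.1.2.zip"}}'
```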