I am running a dedicated MirrorMaker cluster and want to apply my SMT transformation to the records. Could you advise where I should put the JAR with my code, i.e., where should I define the plugin.path property?
Where should I define the plugin.path property?
In the worker properties file that you pass when you start either connect-mirror-maker or connect-distributed.
Where should I put the JAR?
You need to make a subfolder under a directory listed in plugin.path, then put JARs there.
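As a minimal sketch, assuming a hypothetical /opt/connect-plugins directory and my-smt-1.0.jar:

# in the worker properties file passed to connect-mirror-maker or connect-distributed
plugin.path=/opt/connect-plugins

# and on disk the JAR lives in its own subfolder:
#   /opt/connect-plugins/my-smt/my-smt-1.0.jar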
Related
I am using Kafka Connect in MSK.
I have defined a plugin that points to a zip file in S3 - this works fine.
I have implemented SMT and uploaded the SMT jar into the same bucket and folder as the zip file of the plugin.
I define a new connector, and this time I add the SMT using the transforms property. I get an error message that the class com.x.y.z.MySMT could not be found.
I verified that the jar is valid and contains the SMT.
Where should I put the SMT jar in order to make Kafka Connect load it?
Pushing the SMT jar into the plugin zip (under /lib) solved the class-not-found issue.
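For reference, the resulting plugin archive might be laid out like this (the archive and other JAR names are hypothetical; only com.x.y.z.MySMT comes from the question):

my-plugin.zip
  my-plugin/
    lib/
      my-connector-1.0.jar
      my-smt-1.0.jar   (contains com.x.y.z.MySMT)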
I'm developing a custom topic-name mapping, and a JAR file has been produced from it.
And since I'm using MirrorMaker v1, the variables KAFKA_MIRRORMAKER_MESSAGE_HANDLER and KAFKA_MIRRORMAKER_MESSAGE_HANDLER_ARGS have also been added inside the KafkaMirrorMaker YAML file.
But I don't know how to physically add this custom JAR file to the KafkaMirrorMaker pod. I have checked the KafkaMirrorMaker CRD but can't find any clue yet.
So, is there a way to let KafkaMirrorMaker download some file(s)/artifact(s) and include the JAR file(s) on the classpath, so that the custom MessageHandler can be found?
The helm install command is used to deploy the Mirror Maker. The apiVersion of the KafkaMirrorMaker I'm currently using is kafka.strimzi.io/v1beta2.
Based on the strimzi tag, I assume you use Strimzi's Mirror Maker v1? To add your own JAR, you would need to build a custom container image.
You could modify the Strimzi project sources and build everything from scratch (you can add your JAR as a dependency to the 3rd-party libs in `docker-images/kafka/...`). But that is rather complicated, as you have to build the whole project.
The easiest way is to just write your own Dockerfile and use the existing Strimzi image as a base image. For example:
# Use the Strimzi Kafka image that matches your Strimzi and Kafka versions
FROM quay.io/strimzi/kafka:0.26.0-kafka-3.0.0
# Switch to root so we can write into /opt/kafka/libs
USER root:root
COPY ./my-jar.jar /opt/kafka/libs/my-jar.jar
# Switch back to the non-root user the Strimzi images run as
USER 1001
You can build this Dockerfile and push it to your own Docker registry (Docker Hub, Quay, whatever you use). Make sure the FROM line references the right image for the Strimzi and Kafka versions you use.
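For example, with a hypothetical registry and tag:

docker build -t my-registry.example.com/my-org/kafka-mirror-maker:0.26.0 .
docker push my-registry.example.com/my-org/kafka-mirror-maker:0.26.0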
And once you have it, you have to tell Strimzi to use this image. You can do that either with the .spec.image option in the KafkaMirrorMaker custom resource, or by changing the STRIMZI_KAFKA_MIRROR_MAKER_IMAGES environment variable in the Strimzi Cluster Operator deployment and updating the images that should be used there.
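A minimal sketch of the first option, reusing the hypothetical image name from above:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  image: my-registry.example.com/my-org/kafka-mirror-maker:0.26.0
  # ...the rest of your existing spec stays unchanged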
I have set up Kafka's spooldir connector on a Unix machine and it seems to work well. I would like to know whether a few things can be done with spooldir:

1. I want to create multiple directories inside the file path that spooldir scans, create files of the provided format inside them, and scan those too. How do I accomplish this?

2. I do not want the source files to move to different directories after completion/error. I tried providing the same path for source, target, and error, but the connector would not accept the value. Any way around these?
I have downloaded the s3-source connector zip file as provided on the Confluent web page, but I am not sure where to place the extracted files. I am getting the following error.
Please guide me. To load the connector, I am using this command:
confluent local load s3-source -- -d /etc/kafka-connect-s3/confluentinc-kafka-connect-s3-source-1.3.2/etc/quickstart-s3-source.properties
I am not sure where to place the extracted files
If you used confluent-hub install, it would put them in the correct place for you.
Otherwise, you can put them wherever you like, provided you update plugin.path in the Connect worker properties to include the parent directory of the connector's JARs.
Extract the zip file, whether it's a source or sink connector, and place the whole folder with all the JARs inside it under the plugin.path you have set.
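As a sketch, assuming you extract the archive under a hypothetical /etc/kafka-connect-plugins directory:

# in the Connect worker properties
plugin.path=/etc/kafka-connect-plugins

# extracted folder, with its lib/ JARs kept intact:
#   /etc/kafka-connect-plugins/confluentinc-kafka-connect-s3-source-1.3.2/lib/*.jar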
We have a Grails project in production. The Grails version is 2.3.4. We are using MongoDB for persistence.
Earlier, we had all the config hardcoded inside DataSource.groovy. The client demanded that the config live outside the .war file, so we moved it to a .groovy file. Everything was working fine, including the replicaSet config.
Then the client came up with another requirement: since a Groovy file can contain arbitrary program instructions, it could be misused by a person whose only job is to update a property file. So they want all the config in a .properties file.
Here are the contents of my .properties file:
grails.mongo.host=10.3.253.201
grails.mongo.port=27017
grails.mongo.databaseName=testDb
grails.mongo.username=mongouser
grails.mongo.password=mongouser
Where can I give the details of replicaSet? Thanks in advance.
I would like to answer this question in case someone else is facing the same issue.
grails.mongo.uri=mongodb://10.3.253.201,10.3.253.202,10.3.253.203/test
grails.mongo.host=10.3.253.201
grails.mongo.port=27017
grails.mongo.databaseName=test
grails.mongo.username=mongouser
grails.mongo.password=mongouser
This is the content of my config.properties file and it started working for me.
The .201 node was the primary and the other two were backups in my cluster.
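If you also need to name the replica set explicitly, the standard MongoDB connection string accepts a replicaSet option; a sketch assuming a hypothetical replica set named rs0 (check what your mongo plugin version supports):

grails.mongo.uri=mongodb://10.3.253.201,10.3.253.202,10.3.253.203/test?replicaSet=rs0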
Regards.