How to copy or configure kafka connect plugin files? - apache-kafka

I have downloaded the plugin files from https://www.confluent.io/connector/kafka-connect-cdc-microsoft-sql/.
The download contains three folders (lib, etc, doc) plus a manifest.json. The etc folder has connect-avro-docker.properties, mssqlsource.properties, and repro.properties. I can point CONNECT_PLUGIN_PATH at lib, but what about these config files?
The https://docs.confluent.io/current/connect/userguide.html page does not give clear instructions on where to copy these files.
How do I copy or configure Kafka Connect plugin files for Confluent?
Any direction is appreciated.
I installed only kafka-connect, schema-registry, kafka-broker, and zookeeper. I didn't find a way to install the Confluent Hub client on Windows.

The instructions are here
You need to point the plugin.path setting in the Connect properties to the absolute path of the extracted lib folder, i.e. the parent folder of where the JARs are located.
You can also use the confluent-hub tool, which should set all of this up for you.
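As a concrete sketch (the extraction path is hypothetical): the files under etc are sample connector configs, not something you copy into the Connect installation; you pass one of them to connect-standalone, or POST its contents to the Connect REST API, when you create the connector.

```
# layout after extracting the download (hypothetical location)
/opt/connectors/kafka-connect-cdc-mssql/
    lib/            <- the plugin JARs; this is what plugin.path must reach
    etc/            <- sample *.properties connector configs (reference only)
    doc/
    manifest.json

# in the Connect worker config (e.g. connect-avro-standalone.properties):
plugin.path=/opt/connectors/kafka-connect-cdc-mssql/lib
```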

Related

Is there a documentation/blog/example on creating Kafka sink or source plugins?

We are planning to create our own repository of connectors (sink or source plugins) for Apache Kafka, like the one here.
We tried to find documentation or help on how to create a plugin JAR for Kafka.
There is no mention of developing a plugin in the official Apache Kafka documentation.
Any help or pointers would be appreciated; we can share the result back with the open-source community once it is developed.
Here is a guide on How to Build a Connector.
There is also a Connector Developer Guide.
Developing a connector only requires implementing two interfaces, Connector and Task.
Refer to the example source code for a complete, simple example.
Once you’ve developed and tested your connector, you must package it so that it can be easily installed into Kafka Connect installations. The two techniques described here both work with Kafka Connect’s plugin path mechanism.
If you plan to package your connector and distribute it for others to use, you are obligated to properly license and copyright your own code and to adhere to the licensing and copyrights of all libraries your code uses and that you include in your distribution.
Creating an Archive
The most common approach to packaging a connector is to create a tarball or ZIP archive. The archive should contain a single directory whose name will be unique relative to other connector implementations, and will therefore often include the connector’s name and version. All of the JAR files and other resource files needed by the connector, including third party libraries, should be placed within that top-level directory. Note, however, that the archive should never include the Kafka Connect API or runtime libraries.
To install the connector, a user simply unpacks the archive into the desired location. Having a unique name for the archive's top-level directory makes it easier to unpack the archive without overwriting existing files. It also makes it easy to place this directory on the plugin path (see Installing Connect Plugins) or, for older Kafka Connect installations, to add the JARs to the CLASSPATH.
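A sketch of the archive approach using tar (the connector name, version, and paths are made up, and the touch commands just fake the JARs to show the layout):

```shell
# one top-level directory, named uniquely per connector and version
mkdir -p my-connector-1.0
touch my-connector-1.0/my-connector-1.0.jar      # the connector JAR
touch my-connector-1.0/third-party-dep-2.3.jar   # its third-party dependencies
# (never include the Kafka Connect API or runtime JARs in the archive)

tar -czf my-connector-1.0.tar.gz my-connector-1.0

# the user installs it by unpacking into a directory listed in plugin.path
mkdir -p plugins
tar -xzf my-connector-1.0.tar.gz -C plugins
ls plugins/my-connector-1.0
```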
Creating an Uber JAR
An alternative approach is to create an uber JAR that contains all of the connector’s JAR files and other resource files. No directory internal structure is necessary.
To install, a user simply places the connector’s uber JAR into one of the directories listed in Installing Connect Plugins.
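If the connector happens to be built with Maven (an assumption; Gradle has the equivalent shadow plugin), the maven-shade-plugin can produce such an uber JAR. A minimal pom.xml sketch, with the Connect API marked provided so it is not bundled:

```xml
<!-- pom.xml fragment (sketch): keep connect-api out of the uber JAR -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>connect-api</artifactId>
  <version>2.1.1</version>
  <scope>provided</scope>
</dependency>

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
    </execution>
  </executions>
</plugin>
```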

How to install mongodb connector Kafka under windows?

I have installed Kafka in Windows.
I have also unpacked the connector JAR files.
Where do I specify the path to these files, and how do I configure and launch the connector?
In the Connect properties file there is a plugin.path setting, which should be set to the absolute path of the parent directory of all connector folders.
https://docs.confluent.io/current/connect/managing/community.html
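For example (all paths hypothetical), on Windows you could unpack the MongoDB connector JARs into their own folder under a plugins directory, point plugin.path at the parent, and launch with the script that ships with Kafka:

```
# put all the connector JARs in: C:\kafka\plugins\mongodb\

# in config\connect-standalone.properties:
plugin.path=C:/kafka/plugins

# then, with a mongo-source.properties you write for the connector:
#   bin\windows\connect-standalone.bat config\connect-standalone.properties config\mongo-source.properties
```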

Kafka Connect throws ClassNotFoundException when plugin.path has comma-separated values

I am using Kafka Connect to create an MQTT-to-Kafka connection. I put all the Kafka MQTT connector-specific JARs downloaded from the Confluent site into the /data folder, and updated the connect-standalone.properties file accordingly to reflect the plugin path, i.e.
plugin.path=/opt/kafka_2.11-2.1.1/libs,/data
When I run the Kafka Connect
./connect-standalone.sh ../config/connect-standalone.properties ../config/connect-mqtt-source.properties
I get the following error:
[2019-07-18 10:26:05,823] INFO Loading plugin from:
/data/kafka-connect-mqtt-1.2.1.jar
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:220)
[2019-07-18 10:26:05,829] ERROR Stopping due to error
(org.apache.kafka.connect.cli.ConnectStandalone:128)
java.lang.NoClassDefFoundError:
com/github/jcustenborder/kafka/connect/utils/VersionUtil
at io.confluent.connect.mqtt.MqttSourceConnector.version(MqttSourceConnector.java:29)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.versionFor(DelegatingClassLoader.java:344)
Note that connect-utils-0.3.140.jar is present in the /data folder.
If I instead create a soft link, or copy all the JARs from the /data folder into the libs folder, and update the plugin path to:
plugin.path=/opt/kafka_2.11-2.1.1/libs
Kafka connect works perfectly fine.
Any idea why it does not work in the first scenario, i.e. with the connector-specific JARs in a separate folder?
From the Kafka Connect user guide on the Confluent page:
...
Kafka Connect isolates each plugin from one another so that libraries in one plugin are not affected by the libraries in any other plugins. This is very important when mixing and matching connectors from multiple providers.
A Kafka Connect plugin is:
an uber JAR containing all of the classfiles for the plugin and its third-party dependencies in a single JAR file; or
a directory on the file system that contains the JAR files for the plugin and its third-party dependencies.
In your case, you have to put the plugin JARs in their own folder, e.g. /data/pluginName, not directly in /data/.
More details can be found here: Installing Plugins
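A sketch of the fix for the layout above, using a scratch directory in place of /data and empty stand-in files for the JARs named in the question:

```shell
data=./data-demo   # stand-in for /data
mkdir -p "$data"
touch "$data/kafka-connect-mqtt-1.2.1.jar" "$data/connect-utils-0.3.140.jar"

# give the MQTT connector its own plugin folder and move ALL of its JARs there
mkdir -p "$data/kafka-connect-mqtt"
mv "$data"/*.jar "$data/kafka-connect-mqtt/"
ls "$data/kafka-connect-mqtt"

# plugin.path keeps listing the parent directory, e.g.:
#   plugin.path=/opt/kafka_2.11-2.1.1/libs,/data
```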

How can we add more than one plugin to the class path of Kafka Connect?

How can we add more than one plugin externally in Kafka Connect?
https://github.com/debezium/debezium-examples/blob/master/tutorial/debezium-with-oracle-jdbc/Dockerfile
You don't add them to the CLASSPATH; you use plugin.path to point to the folder(s) that contain the plugin JARs.
You can see more information here: https://docs.confluent.io/current/connect/userguide.html#installing-plugins
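plugin.path accepts a comma-separated list of directories, and each listed directory should contain one subdirectory per plugin. A sketch (the directory and plugin names are hypothetical):

```
# connect-distributed.properties (or connect-standalone.properties)
plugin.path=/opt/connectors,/data/extra-connectors

# expected layout:
#   /opt/connectors/debezium-connector-mysql/*.jar
#   /opt/connectors/kafka-connect-jdbc/*.jar
#   /data/extra-connectors/my-custom-plugin/*.jar
```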
I used the following steps:
1. Added a new folder in Docker, like below:
https://github.com/debezium/debezium-examples/blob/master/unwrap-smt/debezium-jdbc-es/Dockerfile#L3
2. Added the same folder to the class path.
3. Downloaded the custom plugin and provided it to the Docker container via a Docker volume mount.
I am now able to see the plugin in the plugin-list API.
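The Debezium images referenced above follow the same pattern; a hypothetical Dockerfile sketch (the base image tag and plugin name are placeholders, and KAFKA_CONNECT_PLUGINS_DIR is the plugin directory those images already define):

```
FROM debezium/connect:1.9
# each plugin gets its own subfolder under the plugin directory
COPY my-custom-plugin/ $KAFKA_CONNECT_PLUGINS_DIR/my-custom-plugin/
```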

How to load configuration files for apps bundled using sbt-native-packager during runtime

I built a universal tarball using sbt-native-packager.
How do I load configuration files like c3p0 and application.conf from /etc/myapp or another custom location when I start the app?
I don't want the config files to be part of the distribution tarball itself.
I believe you can use Typesafe Config's "include" feature to read from a specific location.
See https://github.com/typesafehub/config#features-of-hocon
That said, if you wanted a single global config file, this would require you to create different configurations depending on where you're installing.
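Two hedged sketches of that (paths are hypothetical): an include directive baked into the bundled application.conf, or replacing the whole config at launch with the config.file system property, which sbt-native-packager's generated start script forwards to the JVM:

```
# in the bundled application.conf: pull in an external file
include file("/etc/myapp/overrides.conf")

# or override the whole config at launch without touching the tarball:
#   bin/myapp -Dconfig.file=/etc/myapp/application.conf
```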