I was trying to install and configure Kafka Manager in my Kafka cluster, but I am facing an issue while building the Kafka Manager binary as below:
./sbt clean dist
The server is not connected to the internet, so it is not able to download the required binaries, and the build hangs with the error:
getting scala version x.x.x
Kindly help me install and configure Kafka Manager offline.
Thanks
You can run sbt in offline mode by setting the following parameter:
$ sbt "set offline := true" run
And make sure you have all the required dependencies and components in the local Ivy cache (~/.ivy2/cache) in order to build the project offline.
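If the offline host has never built the project, the cache has to be populated elsewhere first. A sketch of one way to do that, assuming the default Ivy layout and a build machine with internet access (host names here are placeholders):
# On a machine with internet access, run the build once from the
# kafka-manager checkout to populate ~/.ivy2/cache:
./sbt clean dist
# Bundle the cache and copy it to the offline host:
tar czf ivy2-cache.tar.gz -C ~ .ivy2
scp ivy2-cache.tar.gz user@offline-host:~
# On the offline host, unpack the cache and build without the network:
tar xzf ivy2-cache.tar.gz -C ~
./sbt "set offline := true" clean dist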
I have recently updated the logging jar of our application from log4j-1.2.17.jar to the latest log4j-1.2-api-2.18.0.jar. After configuring the latest jars, my Kafka server and Zookeeper server are unable to start:
log4j-1.2-api-2.16.0.jar
log4j-api-2.16.0.jar
log4j-core-2.16.0.jar
log4j-slf4j-impl-2.16.0.jar
slf4j-api-1.7.30.jar
How do I resolve this issue after updating log4j?
You cannot just upgrade JARs and hope things will work. Instead, upgrade all of Kafka and Zookeeper, as I believe they both use reload4j now.
https://issues.apache.org/jira/browse/KAFKA-13660
https://issues.apache.org/jira/browse/ZOOKEEPER-4626
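As a quick sanity check, you can list the logging jars your installed Kafka release actually ships with (assuming a standard installation with $KAFKA_HOME pointing at the install directory):
# Recent Kafka releases ship reload4j rather than log4j 1.x, so mixing in
# your own log4j 2.x bridge jars can prevent the broker from starting:
ls "$KAFKA_HOME"/libs | grep -Ei 'log4j|slf4j|reload4j'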
I am trying to use Kafka Connect in a Docker container with a custom connector (PROGRESS_DATADIRECT_JDBC_OE_ALL.jar) to connect to an OpenEdge database.
I have put the JAR file in the plugin path (/usr/share/java), but it won't load as a connector.
COPY Openedge/PROGRESS_DATADIRECT_JDBC_OE_ALL.jar /usr/share/java/progress
I can load another (standard) connector by putting it in the plugin path. This works:
COPY confluentinc-kafka-connect-jdbc-10.3.2 /usr/share/java/confluentinc-kafka-connect-jdbc-10.3.2
I'm a little lost on how to move forward, and I'm very new to Kafka. My main sources of information are
openedge to kafka streaming and How to use Kafka connect
@OneCricketeer had the solution. As a retro for me, and hopefully helpful to someone else, here are my steps to make this work.
Copy the JDBC connector to CONNECT_PLUGIN_PATH and install it with confluent-hub install:
COPY confluentinc-kafka-connect-jdbc-10.3.2.zip /usr/share/java
RUN confluent-hub install --no-prompt /usr/share/java/confluentinc-kafka-connect-jdbc-10.3.2.zip
Copy the driver (I ended up using openedge.jar) to the path where the other driver JARs (like sqlite) are located, per @OneCricketeer's suggestion:
COPY Openedge/openedge.jar /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib
Verify by enabling DEBUG logging, as suggested by this page.
Finally, add a .properties file to create the connector; in my case it was based on the one in the “openedge to kafka streaming” link above.
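For reference, a minimal sketch of such a properties file using the stock Confluent JDBC source connector class; the host, port, database, and credentials are placeholders, and the DataDirect OpenEdge URL format is an assumption to check against the driver documentation:
# Hypothetical openedge-source.properties; all connection values are placeholders.
cat > openedge-source.properties <<'EOF'
name=openedge-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:datadirect:openedge://db-host:5000;databaseName=mydb
connection.user=user
connection.password=secret
mode=bulk
topic.prefix=openedge-
EOF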
JDBC Drivers are not Connect plugins, nor are they connectors themselves.
You'd need to set the JVM CLASSPATH environment variable for detecting JDBC Drivers, as with any Java process.
The instructions on the linked site suggest you should copy the JDBC driver into the directory of the existing Confluent JDBC connector. While you could use a Docker COPY command, the better way would be to use confluent-hub install.
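A sketch of that approach, assuming the standard Confluent images (the driver jar name is the one from the question):
# Install the JDBC connector from Confluent Hub, then place the OpenEdge
# driver next to the connector's own jars so its classloader can find it:
confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.3.2
cp PROGRESS_DATADIRECT_JDBC_OE_ALL.jar \
   /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/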
I'm trying to install MQ source & sink connectors for our Confluent Cloud cluster. I've done this for on-prem Apache Kafka, but doing the same for the cloud seems to be different. The Confluent documentation says I need the platform installed on my local machine, which I did, then to run confluent-hub install, which installs the connector locally, and then to use the JSON config against the distributed instance.
My problem is that when I submit the JSON, it says the class for MQ was not found. I tried pointing CLASSPATH to the directory where the JARs are, but I still get the same error. How do I run this successfully?
ERROR Uncaught exception in REST call to /connectors (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper:61)
org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.confluent.connect.ibm.mq.IbmMQSourceConnector,
I also want to understand how installing a connector locally would apply to my cloud cluster. Not sure what I'm missing!
Confluent Cloud doesn't support custom connector installation, last I checked. They need to explicitly support and offer it.
I assume you're reading documentation that indicates you need to run your own Connect cluster (not necessarily locally), where you have full control over the connectors that are installed.
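If you do run a self-managed Connect cluster against Confluent Cloud, the worker just needs the cloud cluster's bootstrap address and API credentials. A minimal connect-distributed.properties sketch, where the endpoint and key/secret are placeholders:
# Sketch of connect-distributed.properties for a self-managed Connect
# cluster backed by Confluent Cloud; endpoint and credentials are placeholders.
cat > connect-distributed.properties <<'EOF'
bootstrap.servers=pkc-xxxxx.region.provider.confluent.cloud:9092
group.id=my-connect-cluster
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="API_KEY" password="API_SECRET";
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
plugin.path=/usr/share/java,/usr/share/confluent-hub-components
# The embedded producer/consumer/admin clients need the same security
# settings with prefixes, e.g. producer.security.protocol=SASL_SSL, etc.
EOF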
I want to upgrade ZooKeeper from 3.4.14 to a recent version (3.5.6). I followed the upgrade link and downloaded the ZooKeeper jar,
but on restarting the server, it still fails to load the data.
I tried the snapshot.trust.empty=true flag in the configuration, but then it is not able to load the previous data.
It worked after adding a snapshot.0 file to the version directory inside the ZooKeeper data directory.
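For anyone hitting the same thing, a sketch of that workaround, assuming the default data directory layout and that you have obtained a valid empty snapshot.0 file (one circulates on the ZOOKEEPER-3056 ticket, which tracks this issue); back up the data directory first:
# Back up the ZooKeeper data directory first (the path is an assumption):
cp -r /var/lib/zookeeper /var/lib/zookeeper.bak
# Place the empty-but-valid snapshot into the version directory so 3.5.x
# finds a snapshot when replaying the 3.4.x transaction logs:
cp snapshot.0 /var/lib/zookeeper/version-2/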
I have Confluent 5.0 on my local machine and am trying to read data from a REST API using the REST API source connector, which is not part of Confluent; until now I have only used Confluent's built-in connectors. The REST API source connector is open source and available on GitHub: https://github.com/llofberg/kafka-connect-rest
I have downloaded this connector from GitHub and got stuck here.
Can anybody tell me how to integrate this connector with Confluent, or how I can use it to pull data from a REST API?
Disclaimer: There is no single answer for adding an external Kafka Connect plugin. Confluent provides the Kafka Connect Maven plugin, but that doesn't mean people use it, or even Maven, to package their code.
If it is not on the Confluent Hub, then you'll have to build it by hand.
1) Clone the repo and build it (install Git and Maven first):
git clone https://github.com/llofberg/kafka-connect-rest && cd kafka-connect-rest
mvn clean package
2) Create a directory for it on all Connect workers, similar to the other connectors of the Confluent Platform:
mkdir $CONFLUENT_HOME/share/java/kafka-connect-rest
3) Find each of the shaded JARs (this connector happens to produce multiple JARs; I don't know why...):
find . -iname "*shaded.jar" -type f
./kafka-connect-transform-from-json/kafka-connect-transform-from-json-plugin/target/kafka-connect-transform-from-json-plugin-1.0-SNAPSHOT-shaded.jar
./kafka-connect-transform-add-headers/target/kafka-connect-transform-add-headers-1.0-SNAPSHOT-shaded.jar
./kafka-connect-transform-velocity-eval/target/kafka-connect-transform-velocity-eval-1.0-SNAPSHOT-shaded.jar
./kafka-connect-rest-plugin/target/kafka-connect-rest-plugin-1.0-SNAPSHOT-shaded.jar
4) Copy each of these files into the $CONFLUENT_HOME/share/java/kafka-connect-rest folder created in step 2, on each Connect worker.
5) Make sure the plugin.path in your connect-*.properties file points at the full path to $CONFLUENT_HOME/share/java.
At this point, you've done all the steps listed in the README to build the thing and set up the plugin path, just not in Docker.
6) Start Connect (Distributed)
7) Hit GET /connector-plugins to verify the plugin loaded (see the curl sketch after these steps).
8) Configure and send a JSON payload to POST /connectors.
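For reference, those two calls as curl sketches, assuming the Connect REST API is at localhost:8083; the connector name and config body are placeholders, not a working configuration for this connector:
# Verify the plugin was picked up by the worker:
curl -s localhost:8083/connector-plugins
# Create the connector, using a class name from the output above:
curl -s -X POST -H 'Content-Type: application/json' localhost:8083/connectors \
  -d '{"name": "my-rest-source", "config": {"connector.class": "<class from /connector-plugins>"}}'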
I have not used this connector before, so I do not know how to configure it. Maybe see the examples, or follow along with @rmoff's blog post before the KSQL stuff.