I'm trying to install the MQ source & sink connectors for our Confluent Cloud cluster. I've done this for on-prem Apache Kafka, but doing the same for cloud seems to be different. Following the Confluent documentation, I need to have the platform installed locally, which I did, and then run confluent-hub install, which does install the connector on my local machine, and then use the JSON config for the distributed instance.
My problem is that when I POST the JSON, it says the class for MQ was not found. I tried pointing CLASSPATH to the directory where the jars are, but I still get the same error. How do I run this successfully?
--
ERROR Uncaught exception in REST call to /connectors (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper:61)
org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.confluent.connect.ibm.mq.IbmMQSourceConnector,
I also want to understand how installing the connector locally would apply to my cloud cluster. Not sure what I'm missing!
Confluent Cloud doesn't support custom connector installation, last I checked. They need to explicitly support and offer it.
I assume you're reading some documentation that indicates you need to run your own Connect cluster (not necessarily locally), where you have full control over the connectors that are installed.
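To fix the "class not found" part on a self-managed worker: Connect discovers connectors through the worker's plugin.path setting, not the JVM CLASSPATH. Below is a minimal sketch of a distributed worker config pointed at Confluent Cloud; the broker endpoint, API key/secret, and plugin directory are placeholders for your own values, and the producer/consumer security overrides are omitted for brevity:

cat > connect-distributed-ccloud.properties <<'EOF'
# Placeholder Confluent Cloud endpoint and credentials
bootstrap.servers=pkc-xxxxx.us-east-1.aws.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<API_KEY>" password="<API_SECRET>";
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
config.storage.replication.factor=3
offset.storage.replication.factor=3
status.storage.replication.factor=3
# Connect loads connector classes from plugin.path, not CLASSPATH;
# this is where confluent-hub install puts connectors by default
plugin.path=/usr/share/confluent-hub-components
EOF

connect-distributed connect-distributed-ccloud.properties

Once the worker logs show the MQ plugin being loaded, POSTing your connector JSON to this worker's REST port should succeed, and the connector will move data between MQ and your cloud cluster.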
I am trying to use Kafka Connect in a Docker container with a custom connector (PROGRESS_DATADIRECT_JDBC_OE_ALL.jar) to connect to an OpenEdge database.
I have put the JAR file on the plugin path (/usr/share/java), but it won't load as a connector.
COPY Openedge/PROGRESS_DATADIRECT_JDBC_OE_ALL.jar /usr/share/java/progress
I can load another (standard) connector by putting it on the plugin path. This works:
COPY confluentinc-kafka-connect-jdbc-10.3.2 /usr/share/java/confluentinc-kafka-connect-jdbc-10.3.2
I'm a little lost on how to move forward, and I'm very new to Kafka. My main sources of information are openedge to kafka streaming and How to use Kafka connect.
@OneCricketeer had the solution. As a retro for me, and hopefully helpful to someone else, here are the steps that made this work.
Copy the JDBC connector zip into the image and install it with confluent-hub install (this puts it on the CONNECT_PLUGIN_PATH):
COPY confluentinc-kafka-connect-jdbc-10.3.2.zip /usr/share/java
RUN confluent-hub install --no-prompt /usr/share/java/confluentinc-kafka-connect-jdbc-10.3.2.zip
Copy the driver (I ended up using openedge.jar) to the path where the connector's other jars (like sqlite) are located, per @OneCricketeer's suggestion:
COPY Openedge/openedge.jar /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib
Verify it loads by enabling DEBUG logging, as suggested by this page.
Finally, add a .properties file to create the connector. In my case it was based on the one in the “openedge to kafka streaming” link above.
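Putting those steps together, a Dockerfile along these lines should work (the base image tag is only an example; use whichever cp-kafka-connect version you run):

FROM confluentinc/cp-kafka-connect:7.3.0

# Install the JDBC connector from the local zip via confluent-hub
COPY confluentinc-kafka-connect-jdbc-10.3.2.zip /tmp/
RUN confluent-hub install --no-prompt /tmp/confluentinc-kafka-connect-jdbc-10.3.2.zip

# Drop the OpenEdge JDBC driver into the connector's lib directory so the
# plugin's classloader picks it up alongside the bundled drivers
COPY Openedge/openedge.jar /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/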
JDBC Drivers are not Connect plugins, nor are they connectors themselves.
You'd need to set the JVM CLASSPATH environment variable for detecting JDBC Drivers, as with any Java process.
The instructions on the linked site suggest copying the JDBC driver into the directory of the existing Confluent JDBC connector. While you could use a Docker COPY command, the better way would be to use confluent-hub install.
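If you do go the CLASSPATH route instead, a one-line sketch for the Dockerfile (the directory here is illustrative; it's whatever you copied the driver into):

# Java expands the * wildcard to every jar in the directory
ENV CLASSPATH="/usr/share/java/progress/*"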
I have set up Aerospike with all the configuration required for Kafka. I'm using a local Confluent cluster for Kafka, have installed https://www.confluent.io/hub/aerospike/kafka-connect-aerospike-source, and have started the Confluent cluster, but the connector still isn't starting.
I also found that there is no jar in Confluent's share folder. Is it still under development?
It works and is generally available, but it requires extra licensing on the Aerospike side. I would not expect it to work with the Community Edition.
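Either way, you can confirm whether the worker actually loaded the plugin by querying the Connect REST API, assuming the default REST port of 8083:

# Lists every connector class the worker found on its plugin path
curl -s http://localhost:8083/connector-plugins | grep -i aerospike

If nothing comes back, the jar isn't on the worker's plugin.path, and the connector can never start.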
Which is the better way to install Kafka, Zookeeper, and Schema Registry? I have already installed Kafka and Zookeeper from their official site, but I noticed that the Confluent Schema Registry package comes with Kafka and Zookeeper too. Do I need to download that package and only use the Schema Registry part, or is there another link where I can download Schema Registry without Kafka and Zookeeper? Thanks in advance.
I recommend installing it all from the same place. Confluent offers the Confluent Platform community edition, which includes:
Zookeeper
Kafka
Schema Registry
Kafka Connect
REST Proxy
KSQL Server
That way you will avoid any version incompatibilities. I followed the instructions from Confluent to get the entire suite installed and running.
Together with the command line tool, you will be able to start and stop all the necessary services with a single command:
confluent local start
They also offer Docker installation which might be of interest to you as well.
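As a rough single-host sketch with the Confluent images (tags are illustrative, and this advertised listener only makes Kafka reachable from other containers on the same Docker network):

docker network create kafka-net

docker run -d --name zookeeper --network kafka-net \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  confluentinc/cp-zookeeper:7.3.0

docker run -d --name kafka --network kafka-net \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:7.3.0

docker run -d --name schema-registry --network kafka-net -p 8081:8081 \
  -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://kafka:9092 \
  confluentinc/cp-schema-registry:7.3.0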
I have an existing Kafka cluster. I want to install the Kafka REST Proxy:
https://github.com/confluentinc/kafka-rest
If I install Confluent, does that come with Kafka? I am afraid that if I install it on my master Kafka node, Confluent will override all my settings and mess up my Kafka cluster.
How do you install Kafka REST when you have an existing Kafka cluster?
This is not made clear on their website. I have CentOS and was going to try:
sudo yum install confluent-platform-oss-2.11
Any help would be great....
Download the Confluent Platform tarball and extract it (or preferably use APT/YUM), then configure and run only the REST proxy via kafka-rest-start.
I wouldn't recommend using APT/YUM to install the entire Confluent Platform if you already have an existing Kafka. You might be able to install only kafka-rest with it, though.
Alternatively, back up your existing Kafka and Zookeeper property files, then place the Confluent Platform on top of the existing installation, keeping the original files. If your Kafka is an old release, take this as a good opportunity to schedule an upgrade. Downloading Confluent isn't going to overwrite anything from the upstream Apache project for the corresponding release; if anything, it's an extension.
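As a sketch of the tarball route, the only file you touch is the REST proxy's own config (broker addresses are placeholders), and you start just that one service:

cat > kafka-rest.properties <<'EOF'
id=kafka-rest-1
# Point at your existing brokers; nothing in your Kafka config changes
bootstrap.servers=PLAINTEXT://existing-broker-1:9092,PLAINTEXT://existing-broker-2:9092
listeners=http://0.0.0.0:8082
EOF

./bin/kafka-rest-start kafka-rest.properties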
I'm new to the Confluent world. I know how to start Kafka and Zookeeper from Confluent, but that's not what I need.
I already have 3 Kafka nodes and 2 Zookeepers installed by Ambari. I then downloaded version 3.0.0 of Confluent, and now I want to connect Confluent to the already-running Kafka and Zookeeper. I don't want to stand up the new Kafka or Zookeeper servers that Confluent ships.
Does anyone have an idea how to accomplish that, i.e. what to actually run from Confluent and what to change?
So far I have only been changing files in ./etc/kafka or ./etc/zookeeper inside the Confluent directory. Thank you!
clarify some basics about Confluent and how to manage communication between Confluent and Kafka
First things first, there is no single application called "Confluent" that can be started all on its own.
There is nothing to configure for Kafka or Zookeeper. The Confluent Platform doesn't add anything on top of the existing Apache Kafka you have (presumably installed via Hortonworks or Cloudera).
In fact, those companies add patches to Kafka that make it slightly different from the base Apache version you would get from Confluent.
That being said, if you read through each of the extra services that Confluent provides, you'll notice either a Zookeeper or a bootstrap server configuration option. Fill out those fields, start the respective services, and you're good to go.
what to actually run from Confluent
Look in the bin directory; you'll find all the start scripts there. From the comments, it looks like you're trying to use Connect Distributed (which should already be installed by any recent Kafka installation; it's not Confluent specific) and Schema Registry. You'll have to be more specific about the errors you get, but the config files are all under the etc path.
Unless you're using KSQL, the REST Proxy, or Control Center, there's not much to run because, as mentioned, Kafka Connect is included with the base Apache Kafka project, and Hortonworks maintains their own Schema Registry project.
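As a concrete sketch for running Schema Registry against the existing cluster (hostnames are placeholders): Confluent 3.0.0 configures its store through Zookeeper, while newer releases use a bootstrap server setting instead.

cat > etc/schema-registry/schema-registry.properties <<'EOF'
listeners=http://0.0.0.0:8081
# Confluent 3.x: point at the existing Zookeeper ensemble
kafkastore.connection.url=ambari-zk-1:2181,ambari-zk-2:2181
# Newer releases would use this instead (6667 is the usual Ambari/HDP broker port):
# kafkastore.bootstrap.servers=PLAINTEXT://ambari-broker-1:6667
kafkastore.topic=_schemas
EOF

./bin/schema-registry-start etc/schema-registry/schema-registry.properties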
2 zookeepers installed by Ambari
This is a highly non-recommended setup. Please install an odd number of Zookeepers, preferably 3 or 5.