We are planning to upgrade our existing Apache Kafka cluster to Confluent Kafka. Will we have any data loss in the topics while upgrading? Also, the main reason we are upgrading is to use the S3 sink connector. Is there any connector available in Apache Kafka itself?
Unless you want to migrate to Confluent Server, there is nothing you need to migrate; Confluent Platform includes Apache Kafka.
Kafka Connect, on the other hand, is a pluggable environment and doesn't require any Confluent tools or systems other than the specific JAR file(s) for the S3 connector.
You can use the S3 sink connector from Apache Camel:
https://camel.apache.org/camel-kafka-connector/next/reference/connectors/camel-aws-s3-sink-kafka-sink-connector.html
You just need to download the S3 sink connector JAR file from this link:
https://camel.apache.org/camel-kafka-connector/next/reference/index.html
Copy the JAR file into your connector plugins path. Where that is depends on your worker properties: by default it is the relative path plugins/connectors, or whatever you set in the plugin.path property.
So you don't need to restart your Kafka cluster, and you won't lose any data.
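For reference, a minimal standalone setup could look like the following sketch. The connector class and camel.kamelet.* property names follow the linked reference for recent kamelet-based releases (older releases use different property names, so verify against the version you download), and the topic, bucket, region, and credentials are placeholders:

# In connect-standalone.properties: point the worker at the extracted connector
plugin.path=/opt/kafka/plugins/connectors

# camel-s3-sink.properties: one connector instance
name=camel-s3-sink
connector.class=org.apache.camel.kafkaconnector.awss3sink.CamelAwss3sinkSinkConnector
topics=my-topic
camel.kamelet.aws-s3-sink.bucketNameOrArn=my-bucket
camel.kamelet.aws-s3-sink.region=us-east-1
camel.kamelet.aws-s3-sink.accessKey=<aws-access-key>
camel.kamelet.aws-s3-sink.secretKey=<aws-secret-key>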
I am trying to configure Kafka Connect in distributed mode, but I didn't find any JARs for configuring Kafka Connect in Azure HDInsight.
Can you please help me with the above query?
https://techcommunity.microsoft.com/t5/analytics-on-azure/kafka-connect-with-hdinsight-managed-kafka/ba-p/1547013
Download the relevant Kafka Connect plugins from Confluent Hub into the directory referenced by your plugin.path setting.
For example, the Kafka Connect plugin for streaming data from Kafka to Azure Blob Storage:
Azure Blob Storage Sink Connector
https://www.confluent.io/hub/confluentinc/kafka-connect-azure-blob-storage
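As a sketch, a sink instance for that plugin could be configured like this (property names follow the Confluent documentation for this connector; the topic, storage account, key, and container values are placeholders):

name=azure-blob-sink
connector.class=io.confluent.connect.azure.blob.AzureBlobStorageSinkConnector
tasks.max=1
topics=my-topic
# Placeholder storage account credentials
azblob.account.name=mystorageaccount
azblob.account.key=<storage-account-key>
azblob.container.name=mycontainer
format.class=io.confluent.connect.azure.blob.format.avro.AvroFormat
flush.size=3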
What is the difference between using the Apache Kafka Connect API and the Confluent Kafka Connect API?
As Mike mentioned, there is no core difference between Apache Kafka Connect and Confluent's Kafka Connect.
As an example, here is how to use the JDBC connector plugin from Confluent with a MySQL database to read data from MySQL and send it to a Kafka topic.
For a quick demo on your laptop, follow the main steps below:
Download Apache Kafka (from either the Apache site, or as part of the Confluent Platform download).
Run a single Kafka broker using the kafka-server-start script.
Download kafka-connect-jdbc from Confluent Hub.
Edit plugin.path in connect-standalone.properties to include the full path to the extracted kafka-connect-jdbc files.
Download the MySQL JDBC driver and copy it into the kafka-connect-jdbc folder with the other JARs (you should see an SQLite JAR is already there).
Create a JDBC source connector configuration file (a sample is shown after the command below).
Run Kafka Connect in standalone mode with the JDBC source connector configuration:
$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties $KAFKA_HOME/config/jdbc-connector-config.properties
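A minimal jdbc-connector-config.properties for that step could look like this (the database name, credentials, table, and topic prefix are placeholders):

name=mysql-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:mysql://localhost:3306/mydb
connection.user=myuser
connection.password=mypassword
table.whitelist=mytable
# Detect new rows by watching an auto-incrementing ID column
mode=incrementing
incrementing.column.name=id
topic.prefix=mysql-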
Useful links
https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector/
https://docs.confluent.io/current/connect/kafka-connect-jdbc/index.html
If you want to write code instead, then learn how to use the Kafka Producer API.
https://docs.confluent.io/current/clients/producer.html
https://kafka.apache.org/documentation/#producerapi
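As a minimal sketch of the Producer API in Java (the broker address, topic, key, and value are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // try-with-resources closes the producer, which also flushes pending records
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}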
I have a Kafka cluster that I work with; it is managed by my team and runs on Kubernetes. We want to install Kafka Connect via Helm into our cluster to work with our Kafka. The Kafka we are running is NOT the Confluent Platform Kafka. Is there a good way to do this? I was wondering if cp-helm-charts would work. Will the confluentinc Kafka Connect container be compatible with my Kafka cluster, which is not on the Confluent platform?
Kafka Connect has never been labelled as a Confluent Platform-exclusive product.
The framework is entirely Apache 2.0-licensed and open source.
Similarly, "Confluent Platform Kafka" is just Apache Kafka.
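So the cp-kafka-connect chart from cp-helm-charts should work against your existing cluster. As a sketch, assuming the umbrella chart exposes per-component enabled flags and a kafka.bootstrapServers override as its values.yaml suggests (verify these keys against the chart version you use; the broker address is a placeholder):

helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
helm install my-connect confluentinc/cp-helm-charts \
  --set cp-kafka.enabled=false \
  --set cp-zookeeper.enabled=false \
  --set cp-schema-registry.enabled=false \
  --set cp-kafka-rest.enabled=false \
  --set cp-ksql-server.enabled=false \
  --set cp-control-center.enabled=false \
  --set cp-kafka-connect.kafka.bootstrapServers=PLAINTEXT://my-kafka:9092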
I would like to integrate Confluent's kafka-rest-proxy with Apache Kafka 2.0.0.
Could someone explain how I can install only the Kafka REST Proxy for my cluster with 3 Kafka nodes and 3 ZooKeeper nodes?
All of the Confluent tools work with Apache Kafka.
There is no individual download of the REST Proxy, so you would have to use Docker or download the full Confluent platform.
If not using Docker, you can find kafka-rest.properties in the etc/kafka-rest folder; edit it with at least your bootstrap servers.
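At a minimum, something like this (the id value is arbitrary and the broker host names are placeholders):

id=kafka-rest-server-1
bootstrap.servers=PLAINTEXT://kafka1:9092,PLAINTEXT://kafka2:9092,PLAINTEXT://kafka3:9092
# Optional: where the REST Proxy itself listens (8082 is the default port)
listeners=http://0.0.0.0:8082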
You can find other config options in the Confluent REST Proxy documentation.
Then run this to start it from the extracted Confluent Platform download:
./bin/kafka-rest-start ./etc/kafka-rest/kafka-rest.properties
Is it possible to capture IBM MQ data with Kafka-Cloudera?
Confluent offers connectors to capture IBM MQ data, but I'm not sure if I can do the same with Kafka-Cloudera.
Yes.
Kafka Connect is not a framework specific to Confluent or Cloudera. It is built into all Apache Kafka offerings.
Whether Confluent Platform includes a specific connector as part of its OSS offering, such that you can individually download and use that connector, is a separate issue.
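For example, IBM itself publishes an Apache-licensed Kafka Connect source connector for MQ (kafka-connect-mq-source), separate from Confluent's connector, and it runs on any Kafka Connect cluster. A sketch of a connector configuration for it, with placeholder queue manager details modeled on the project's README:

name=mq-source
connector.class=com.ibm.eventstreams.connect.mqsource.MQSourceConnector
tasks.max=1
# Placeholder queue manager connection details
mq.queue.manager=QM1
mq.connection.name.list=localhost(1414)
mq.channel.name=DEV.APP.SVRCONN
mq.queue=DEV.QUEUE.1
mq.record.builder=com.ibm.eventstreams.connect.mqsource.builders.DefaultRecordBuilder
topic=mq-source-topic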