Is Kafka Connect JDBC Source connector idempotent?

The documentation for this connector doesn't mention this characteristic.
So, does this connector guarantee that it won't produce duplicate records under broker crashes or other failures?
Do we have to configure something to get idempotence, the same way we would with any other Kafka producer (enable.idempotence: true)?

The Kafka Connect JDBC source connector is not idempotent at the moment. See the relevant KIP-318 and JIRA ticket.
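That said, you can still enable idempotence on the producers that the Connect worker uses to write source records, via prefixed config overrides. That guards against duplicates from producer retries to the broker, but it does not make the connector's own database polling exactly-once. A minimal sketch, assuming a distributed worker (the per-connector override requires Kafka 2.3+):

# In the worker config (connect-distributed.properties): applies to the
# producers of all source connectors running on this worker
producer.enable.idempotence=true

# Or per connector: first allow overrides in the worker config...
connector.client.config.override.policy=All
# ...then set this in the individual connector's config
producer.override.enable.idempotence=true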

Related

How to make a Data Pipeline from MQTT to KAFKA Broker to MongoDB?

How can I make a data pipeline? I am sending data from MQTT to a Kafka topic using a source connector, and on the other side I have connected the Kafka broker to MongoDB using a sink connector. I am having trouble making a data pipeline that goes from MQTT to Kafka and then to MongoDB. Both connectors work properly individually. How can I integrate them?
[Screenshots: the MQTT connector config on node 1, a message published from MQTT, the Kafka consumer output, the MongoDB connector config on node 2, and the resulting data in MongoDB]
It is hard to tell what exactly the problem is without more logs. Please provide your Connect config as well, and check the /status endpoint of your connector. I still don't understand exactly what issue you are facing: you are saying that the MQTT source connector sends messages successfully to the Kafka topic, and your MongoDB sink connector successfully reads this Kafka topic and writes to your MongoDB; that is your pipeline. Where is the error? Is your Kafka the same Kafka, or two separate Kafka clusters? Both seem to be localhost, but is it the same machine?
Please elaborate and explain what you are expecting. What does "pipeline" mean in your words?
You need both connectors to share the same Kafka cluster. What do node 1 and node 2 mean? Are they separate Kafka instances? Your connectors need to connect to the same Kafka node/cluster in order to share the data in the Kafka topic, one for input and one for output. Share your bootstrap server parameters, and share your Kafka server.properties as well.
In order to run two different Connect clusters against the same Kafka, you need to set different internal topics for each Connect cluster (see the sketch after this list):
config.storage.topic
offset.storage.topic
status.storage.topic
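A minimal sketch of the two workers' connect-distributed.properties files (topic and group names here are illustrative; in distributed mode each Connect cluster also needs its own group.id):

# connect-distributed.properties for Connect cluster A
bootstrap.servers=localhost:9092          # same Kafka for both clusters
group.id=connect-cluster-a
config.storage.topic=connect-a-configs
offset.storage.topic=connect-a-offsets
status.storage.topic=connect-a-status

# connect-distributed.properties for Connect cluster B
bootstrap.servers=localhost:9092
group.id=connect-cluster-b
config.storage.topic=connect-b-configs
offset.storage.topic=connect-b-offsets
status.storage.topic=connect-b-status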

How to deal with Kafka JDBC Sink Connector with FME

I have already set up the Kafka JDBC sink connector, where it consumes data from the Kafka producer API. However, I want to set up FME to handle the data side and sink it to the database, where it will interact with GIS (geographic information system) and stream the spatial data. I do not have much knowledge of FME, so is there any information/documentation, or does anyone know and can explain how to set up FME with the Kafka JDBC Sink Connector?
Thank you
The FME connector appears to be a plain producer/consumer and has no relation to the Kafka Connect API. https://docs.safe.com/fme/2019.1/html/FME_Desktop_Documentation/FME_Transformers/Transformers/kafkaconnector.htm
You also wouldn't "set it up with the JDBC connector". The sink writes to the database, so FME would need to read from there, or bypass Kafka Connect altogether and use FME's supported Kafka consumer processes.

Kafka 2.0 - Kafka Connect Sink - Creating a Kafka Producer

We are currently on HDF (Hortonworks Dataflow) 3.3.1 which bundles Kafka 2.0.0 and are trying to use Kafka Connect in distributed mode to launch a Google Cloud PubSub Sink connector.
We are planning on sending back some metadata into a Kafka Topic and need to integrate a Kafka producer into the flush() function of the Sink task java code.
Would this have a negative impact on the process where Kafka Connect commits offsets back to Kafka (as we would be adding the overhead of running a Kafka producer before the flush)?
Also, how does Kafka Connect get the Bootstrap servers list from the configuration when it is not specified in the Connector Properties for either the sink or the source? I need to use the same Bootstrap server list to start the producer.
Currently I am changing the config for the sink connector, adding bootstrap server list as a property and parsing it in the Java code for the connector. I would like to use bootstrap server list from the Kafka Connect worker properties if that is possible.
Kindly help on this.
Thanks in advance.
need to integrate a Kafka producer into the flush() function of the Sink task java code
There is no producer instance exposed in the SinkTask API...
Would this have a negative impact on the process where Kafka Connect commits offsets back to Kafka (as we would be adding the overhead of running a Kafka producer before the flush)?
I mean, you can add whatever code you want. As far as negative impacts go, that's up to you to benchmark on your own infrastructure. Obviously, adding more blocking code makes the other processes slower overall.
how does Kafka Connect get the Bootstrap servers list from the configuration when it is not specified in the Connector Properties for either the sink or the source?
Sinks and sources are not workers. Look at connect-distributed.properties.
I would like to use bootstrap server list from the Kafka Connect worker properties if that is possible
It's not possible. Adding extra properties to the sink/source configs is the only way. (Feel free to file a Kafka JIRA requesting such a feature of exposing the worker configs, though.)
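For what it's worth, here is a rough sketch of the workaround described in the question: pass the bootstrap list as an extra connector property and build a producer inside the task. The property name metadata.bootstrap.servers and the topic sink-metadata are hypothetical, and this is just an illustrative sketch, not an official pattern:

import java.util.Collection;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class MetadataEmittingSinkTask extends SinkTask {
    private KafkaProducer<String, String> metadataProducer;

    @Override
    public void start(Map<String, String> props) {
        // Parse the bootstrap list passed in as an extra connector property
        // ("metadata.bootstrap.servers" is a made-up name for illustration)
        Properties p = new Properties();
        p.put("bootstrap.servers", props.get("metadata.bootstrap.servers"));
        p.put("key.serializer", StringSerializer.class.getName());
        p.put("value.serializer", StringSerializer.class.getName());
        metadataProducer = new KafkaProducer<>(p);
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        // ... write records to the external system ...
    }

    @Override
    public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) {
        // Emit metadata before Connect commits offsets; this blocks the
        // commit path, which is the overhead the question asks about.
        metadataProducer.send(new ProducerRecord<>("sink-metadata", "flushed",
                String.valueOf(offsets.size())));
        metadataProducer.flush();
    }

    @Override
    public void stop() {
        if (metadataProducer != null) metadataProducer.close();
    }

    @Override
    public String version() {
        return "0.1";
    }
}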

How to load data from Kafka into CrateDB?

From the following issue on the CrateDB GitHub page, it seems it is not possible, i.e., the Kafka protocol is not supported by CrateDB.
https://github.com/crate/crate/issues/7459
Is there another way to load data from Kafka into CrateDB?
Usually you'd use Kafka Connect for integrating Kafka with target (and source) systems, using the appropriate connector for the destination technology.
I can't find a Kafka Connect connector for CrateDB, but there is a JDBC sink connector for Kafka Connect, and a JDBC driver for CrateDB, so this may be worth a try; see the sketch below.
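A minimal sketch of what such a connector config might look like (the connection URL format and names here are assumptions, untested; check the CrateDB JDBC driver docs, and note that CrateDB also speaks the PostgreSQL wire protocol):

# Hypothetical JDBC sink config pointed at CrateDB
name=cratedb-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=my_topic
connection.url=jdbc:crate://cratedb-host:5432/   # URL format is an assumption
auto.create=true                                 # create the table from the record schema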
You can read more about Kafka Connect here, and see it in action in this blog series:
https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-1/
https://www.confluent.io/blog/the-simplest-useful-kafka-connect-data-pipeline-in-the-world-or-thereabouts-part-2/
https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-3/
Disclaimer: I work for Confluent, and I wrote the above blog posts.

Kafka exactly once with other destination

I am using Kafka 2, and it looks like exactly-once is possible with:
Kafka Streams
Kafka read/transform/write transactional producer
Kafka connect
All of the above work between topics (the source and the destination are both topics).
Is it possible to have exactly-once with other destinations?
The sources and destinations (sinks) of Connect are not only topics, but the connector you use determines the delivery semantics; not all are exactly-once.
For example, a JDBC source connector polling a database might miss some records.
Sink connectors coming out of Kafka will send every message from a topic, but it's up to the downstream system to acknowledge that retrieval.
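One common way to get effectively-once behavior at a non-Kafka destination, despite at-least-once delivery, is to make the writes idempotent. With the JDBC sink connector, for instance, you can configure upserts keyed on the record key, so a redelivered record overwrites the same row instead of creating a duplicate. A sketch with illustrative topic, database, and column names:

# Idempotent JDBC sink: redeliveries upsert into the same row
name=orders-jdbc-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=orders
connection.url=jdbc:postgresql://db-host:5432/shop
insert.mode=upsert      # write with upsert semantics instead of plain INSERT
pk.mode=record_key      # derive the primary key from the Kafka record key
pk.fields=order_id      # target column for the key; illustrative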