Basically I want to send messages from an MQTT (Mosquitto) broker to a Knative event source (Kafka). With a plain Kafka broker I could use Confluent's Kafka Connect, but in this case the target is a Knative event source rather than a broker. The problem lies with the conversion to CloudEvents.
Since you have an MQTT broker which can already read/write to Kafka, you might use the Kafka source to convert the Kafka messages to CloudEvents and send them to your service.
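A minimal sketch of such a KafkaSource, assuming the Knative Kafka source component is installed in the cluster (the names, topic, and bootstrap address below are placeholders):

apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: mqtt-bridge-source
spec:
  consumerGroup: mqtt-bridge-group
  bootstrapServers:
    - my-cluster-kafka-bootstrap:9092   # placeholder broker address
  topics:
    - mqtt-messages                     # topic the MQTT bridge writes to
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service                  # placeholder Knative Service

The source reads each Kafka record, wraps it in a CloudEvent, and delivers it to the sink over HTTP.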
If you're asking about how to connect the MQTT broker with Kafka, I'd suggest either finding or writing an MQTT source or using something outside the Knative ecosystem.
Related
I want to build a Kafka connector which needs to read from a Kafka topic, make a call to a gRPC service to get some data, and write the whole result into another Kafka topic.
I have written a Kafka sink connector which reads from a topic and calls a gRPC service, but I am not sure how to redirect this data into a Kafka topic.
Kafka Streams can read from topics, call external services as necessary, then forward this data to a new topic in the same cluster (see the sketch after this answer).
MirrorMaker2 can be used between different clusters, but using Connect transforms with external services is generally not recommended.
Or you could make your gRPC service into a Kafka producer.
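A minimal sketch of the Kafka Streams approach, with topic names as placeholders and enrichWithGrpc standing in for your actual gRPC stub invocation:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class GrpcEnricher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "grpc-enricher");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");

        // Per-record call to the external service, then forward the
        // enriched value to the output topic in the same cluster.
        input.mapValues(GrpcEnricher::enrichWithGrpc).to("output-topic");

        new KafkaStreams(builder.build(), props).start();
    }

    // Placeholder for the gRPC blocking-stub call, e.g. stub.getData(request)
    private static String enrichWithGrpc(String value) {
        return value; // return the service response merged with the input
    }
}

Note that a blocking call inside mapValues stalls the stream thread, so batching or an async pattern may be needed at higher throughput.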
I have a Kubernetes service that only does something when it consumes a message from a Kafka queue. The queue does not have messages very often, and running the service as a job triggered whenever a message is found would save resources.
I see that Kubernetes has this functionality for AMQP-type message services: https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/
Is there a way to adapt this for Kafka, given that Kafka does not support AMQP? I'd switch to a different messaging system, but I have other services that also read from this queue that require Kafka.
That Kafka consumer Service is all you really need. If you want to save resources, it could be paired with the KEDA autoscaler so that it scales up and down depending on load or consumer-group lag (see the sketch below).
Or you can use a serverless platform such as Knative to trigger based on Kafka (or other messaging system) events.
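A minimal KEDA sketch, where the deployment, group, and topic names are placeholders:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler
spec:
  scaleTargetRef:
    name: my-consumer-deployment   # the Deployment running your consumer
  minReplicaCount: 0               # scale to zero while the topic is idle
  maxReplicaCount: 5
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: my-cluster-kafka-bootstrap:9092
        consumerGroup: my-consumer-group
        topic: my-topic
        lagThreshold: "10"         # lag per replica before scaling out

With minReplicaCount: 0, the pods are removed entirely when there is no consumer-group lag, which gives you the "job triggered whenever a message is found" behaviour without switching messaging systems.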
Kafka does not support AMQP
Kafka Connect should be able to bridge AMQP to Kafka; Apache Camel, for example, has connectors for both.
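A rough sketch of what the Camel side could look like when submitted to a Connect worker. The class and option names below follow the camel-kafka-connector naming pattern, but treat them as assumptions and verify them against the Camel Kafka Connector docs for your version:

# hypothetical Camel AMQP source connector config -- verify names
name=amqp-to-kafka-bridge
connector.class=org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceConnector
topics=my-kafka-topic
camel.source.path.destinationType=queue
camel.source.path.destinationName=my-amqp-queue
# broker connection options (connection factory, credentials) omitted;
# they depend on your AMQP broker and connector version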
How can I make a data pipeline? I am sending data from MQTT to a Kafka topic using a source connector, and on the other side I have connected the Kafka broker to MongoDB using a sink connector. I am having trouble making a data pipeline that goes from MQTT to Kafka and then to MongoDB. Both connectors work properly individually. How can I integrate them?
Here are my MQTT and MongoDB connectors (screenshots: Node 1 MQTT connector config, a message published from MQTT, the Kafka consumer output, Node 2 MongoDB connector config, and the MongoDB collection).
It is hard to tell what exactly the problem is without more logs. Please provide your Connect config as well, and check the /status endpoint of your connector. I still do not understand exactly what issue you are facing: you are saying that the MQTT source connector sends messages successfully to the Kafka topic, and your MongoDB sink connector successfully reads from this Kafka topic and writes to your MongoDB, hence your pipeline. Where is the error? Is your Kafka the same Kafka, or two separate Kafka clusters? Both seem to be localhost, but is it the same machine?
Please elaborate and explain what you are expecting. What does "pipeline" mean in your words?
You need both connectors to share the same Kafka cluster. What do node 1 and node 2 mean, are they separate Kafka instances? Your connectors need to connect to the same Kafka node/cluster in order to share the data inside the Kafka topic, one for input and one for output. Please share your bootstrap server parameters and the server.properties of the Kafka as well.
In order to run two different Connect clusters against the same Kafka, you need to set different internal topics for each Connect cluster (see the example after this list):
config.storage.topic
offset.storage.topic
status.storage.topic
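For example, two distributed worker configs pointing at the same Kafka cluster (topic and group names here are only illustrative):

# worker-a.properties
group.id=connect-cluster-a
config.storage.topic=connect-a-configs
offset.storage.topic=connect-a-offsets
status.storage.topic=connect-a-status

# worker-b.properties
group.id=connect-cluster-b
config.storage.topic=connect-b-configs
offset.storage.topic=connect-b-offsets
status.storage.topic=connect-b-status

The group.id must also differ, since it is what identifies a distributed Connect cluster.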
I am trying to use the Lenses MQTT source connector [https://docs.lenses.io/connectors/source/mqtt.html] with Confluent Kafka v5.4.
Following is my MQTT source connector properties file:
connector.class=com.datamountaineer.streamreactor.connect.mqtt.source.MqttSourceConnector
connect.mqtt.clean=false
key.converter.schemas.enable=false
connect.mqtt.timeout=1000
value.converter.schemas.enable=false
name=kmd-source-4
connect.mqtt.kcql=INSERT INTO kafka-source-topic-2 SELECT * FROM ctt/+/+/location WITHCONVERTER=`com.datamountaineer.streamreactor.connect.converters.source.JsonSimpleConverter` WITHKEY(id)
value.converter=org.apache.kafka.connect.json.JsonConverter
connect.mqtt.service.quality=1
key.converter=org.apache.kafka.connect.json.JsonConverter
connect.mqtt.hosts=tcp://ip:1883
connect.mqtt.converter.throw.on.error=true
connect.mqtt.username=username
connect.mqtt.password=password
errors.log.include.messages=true
errors.log.enable=true
I am publishing messages from the UI-based MQTT client MQTT.fx to the MQTT topic 'ctt/+/+/location' and subscribing to those messages on the Kafka topic 'kafka-source-topic-2'. I am using RabbitMQ as my MQTT broker, and my Confluent platform and RabbitMQ are on different VMs. I do not think using the RabbitMQ broker instead of Mosquitto MQTT should be a problem. Whatever and whenever I publish from MQTT.fx, I successfully see the messages in MQTT.fx upon subscription. I had also set up the Confluent MongoDB source connector and it works seamlessly.
But my problem is that messages published on the MQTT topic arrive on the mapped Kafka topic only intermittently. What could be the reason? I do not see any error messages in the Kafka Connect logs. Are there any connection-related properties with respect to the MQTT broker that I need to specify in my MQTT source properties file? Are there any properties that must be included on the RabbitMQ broker side? Has anyone used the Lenses MQTT source and sink connectors and would like to suggest anything about them?
Your connect.mqtt.timeout is only 1 second?! Intermittent messages suggest to me that your connector is timing out and has to re-establish its connection; while it is busy doing that, MQTT messages are coming in but not making it to the connector, as it is not subscribed to the broker at that instant. Try increasing your timeout to something like 60000 (1 minute) and see what happens. Is there any reason you need it to time out? RabbitMQ can handle connections that stay open for long periods of time with no traffic.
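In the properties file above, that would be the following change (60000 ms is just a suggested starting point):

connect.mqtt.timeout=60000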
I am using Kafka 2, and it looks like exactly-once is possible with:
Kafka Streams
Kafka read/transform/write transactional producer
Kafka connect
Here, all of the above work between topics (source and destination are topics).
Is it possible to have exactly once with other destinations?
Sources and destinations (sinks) of Connect are not only topics, but which connector you use determines the delivery semantics; not all are exactly-once.
For example, a JDBC Source Connector polling a database might miss some records
Sink connectors coming out of Kafka will send every message from a topic, but it is up to the downstream system to acknowledge that delivery.
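For reference, a minimal consume-transform-produce sketch with a transactional producer. Topic names and the transform are placeholders, and the sendOffsetsToTransaction overload taking ConsumerGroupMetadata needs Kafka clients 2.5+ (older clients take the group id string instead):

import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOnceRelay {
    public static void main(String[] args) {
        Properties cp = new Properties();
        cp.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cp.put(ConsumerConfig.GROUP_ID_CONFIG, "relay-group");
        cp.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        cp.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        cp.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        cp.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        Properties pp = new Properties();
        pp.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        pp.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "relay-tx-1");
        pp.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        pp.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {
            consumer.subscribe(List.of("source-topic"));
            producer.initTransactions();
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                try {
                    for (ConsumerRecord<String, String> r : records) {
                        // placeholder transform step
                        producer.send(new ProducerRecord<>("dest-topic", r.key(), r.value().toUpperCase()));
                    }
                    // Commit the consumed offsets inside the same transaction,
                    // so a restart cannot produce duplicates downstream.
                    producer.sendOffsetsToTransaction(nextOffsets(records), consumer.groupMetadata());
                    producer.commitTransaction();
                } catch (Exception e) {
                    producer.abortTransaction();
                }
            }
        }
    }

    // Offset to commit per partition: last processed offset + 1
    private static Map<TopicPartition, OffsetAndMetadata> nextOffsets(ConsumerRecords<String, String> records) {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (TopicPartition tp : records.partitions()) {
            List<ConsumerRecord<String, String>> rs = records.records(tp);
            offsets.put(tp, new OffsetAndMetadata(rs.get(rs.size() - 1).offset() + 1));
        }
        return offsets;
    }
}

This pattern only gives exactly-once while the destination is another Kafka topic in the same cluster; once data leaves Kafka (a database, an HTTP call), the external system has to provide idempotence or transactions itself.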