Kafka JMS Source Connector: write messages to multiple topics - apache-kafka

I have an ActiveMQ Artemis JMS queue and a Kafka source connector reading from it. I want to write the messages from this queue to multiple topics in parallel. I found that a Single Message Transform (SMT) could be the solution, and I tried to configure RegexRouter, but it seems I can only change the name of the topic, not fan the message out to several topics.
I tried creating 3 connector instances, but only one of them receives each message. I guess that is because the message is removed from the queue on the first read.
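SMTs operate on one record at a time and return at most one record, which is why RegexRouter can rename the target topic but cannot duplicate a message to several topics. A common workaround (not part of the original question) is to let the connector write to a single topic and fan out from there with a small Kafka Streams application. A minimal sketch, assuming the connector lands messages in a topic named jms-ingest and that the three target topic names are placeholders:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class JmsFanOut {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // "jms-ingest" is the assumed topic the JMS source connector writes to.
        KStream<String, String> source = builder.stream("jms-ingest");
        // Each to() call adds a sink node, so every record is copied to all three topics.
        source.to("topic-a");
        source.to("topic-b");
        source.to("topic-c");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "jms-fanout");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}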

Related

How can I use MQTT and Kafka to connect them to Eclipse Ditto?

I have a question about the connection between Eclipse Ditto, MQTT and Kafka.
Is it possible to receive data in Ditto via an MQTT broker and then send this data to a Kafka topic?
I have created a source connection in Ditto to consume messages from the broker, but now I'm stuck on the Kafka side.
I think I need to create an MQTT target connection to send data to a different topic, and then create source and target connections for Kafka; this way, with the first type of connection (source) I can consume messages from the topic, and with the target connection I can send the data to another topic.
Summary:
I need to consume data from an MQTT broker with Ditto and then send this data to a Kafka topic, in order to store it in a time-series DB (InfluxDB).
I hope it is all clear; any suggestions would be helpful, thanks!
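No answer is recorded here, but for orientation: Ditto's connectivity API lets you declare a Kafka connection with a target, so a separate MQTT hop should not be needed. Below is a rough sketch of such a connection payload; the name, URI, topic address and authorization subject are all placeholders, and the exact schema should be checked against the Ditto connectivity documentation for your version:

{
  "name": "kafka-out",
  "connectionType": "kafka",
  "connectionStatus": "open",
  "uri": "tcp://my-kafka-host:9092",
  "specificConfig": {
    "bootstrapServers": "my-kafka-host:9092"
  },
  "targets": [
    {
      "address": "ditto-telemetry",
      "topics": ["_/_/things/twin/events"],
      "authorizationContext": ["nginx:ditto"]
    }
  ]
}

From there, a separate consumer (for example Telegraf or a Kafka Connect sink) can move the topic's data into InfluxDB.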

Can Kafka publish messages to AWS Lambda?

I have to publish messages from a Kafka topic to Lambda, process them, and store them in a database, using a Spring Boot application. I did some research and found something to consume messages from Kafka:
public Function<KStream<String, String>, KStream<String, String>> process(){}
However, I'm not sure if this can only publish the consumed messages to another Kafka topic, or whether it can be used as an event source for Lambda. I need some guidance on consuming Kafka messages and turning them into a Lambda event source.
Brokers do not push. Consumers always poll.
The code shown is for the Kafka Streams API, which primarily writes to other Kafka topics. While you could fire HTTP events from it to invoke a Lambda, that's not recommended.
Alternatively, Kafka is already supported as a Lambda event source, so you don't need to write any consumer code.
https://aws.amazon.com/about-aws/whats-new/2020/12/aws-lambda-now-supports-self-managed-apache-kafka-as-an-event-source/
This is possible from MSK or a self-managed Kafka cluster.
Regarding "process them and store in a database": your Lambda could process the data and send it to a new Kafka topic using a producer. You can then use MSK Connect, or run your own Kafka Connect cluster elsewhere, to dump the records into a database. No Spring/Java code would be necessary.
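To illustrate the event-source route: with Kafka configured as the trigger, the Lambda receives batches of records and can re-produce processed results to another topic. A sketch in Java, assuming the KafkaEvent type from the aws-lambda-java-events library; the output topic name and environment variable are placeholders:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KafkaEvent;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Base64;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class KafkaToKafkaHandler implements RequestHandler<KafkaEvent, Void> {

    private final KafkaProducer<String, String> producer;

    public KafkaToKafkaHandler() {
        Properties props = new Properties();
        // BOOTSTRAP_SERVERS is an assumed environment variable for this sketch.
        props.put("bootstrap.servers", System.getenv("BOOTSTRAP_SERVERS"));
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    @Override
    public Void handleRequest(KafkaEvent event, Context context) {
        // Records arrive grouped by "topic-partition" key; values are base64-encoded.
        for (Map.Entry<String, List<KafkaEvent.KafkaEventRecord>> entry
                : event.getRecords().entrySet()) {
            for (KafkaEvent.KafkaEventRecord record : entry.getValue()) {
                String value = new String(Base64.getDecoder().decode(record.getValue()));
                // "processed-topic" is a placeholder output topic.
                producer.send(new ProducerRecord<>("processed-topic", value));
            }
        }
        producer.flush();
        return null;
    }
}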

Regarding WSO2 ESB Kafka Consumer Project

I am creating two WSO2 ESB Integration projects: one for a Kafka message producer and a second for a Kafka message consumer. I created the Kafka message producer successfully; on invocation of a REST API, the message is dropped onto the topic. When creating the Kafka message consumer, however, there is no transport such as Kafka for a proxy to listen on, and per the documentation we need to use an inbound endpoint instead. My requirement is that the consumer should receive messages from a topic automatically (similar to a JMS consumer) whenever a message is available on the topic. How can I achieve that with an inbound endpoint and the Kafka connection details?
Any inputs?
If you want to consume messages from a Kafka topic using an EI server, you need to use the Kafka inbound endpoint. This inbound endpoint periodically polls messages from the topic, and the polling interval is configurable. Refer to the documentation [1] for more information.
[1] https://docs.wso2.com/display/EI660/Kafka+Inbound+Protocol
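For orientation, a Kafka inbound endpoint in EI is declared as Synapse XML along these lines. This is a sketch modelled on the EI 6.x documentation; the topic, group, sequence and connection values are placeholders to replace with your own:

<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                 name="KafkaConsumerEP"
                 protocol="kafka"
                 sequence="kafkaProcessSeq"
                 onError="kafkaErrorSeq"
                 suspend="false">
    <parameters>
        <!-- Polling interval in milliseconds: this is the configurable interval. -->
        <parameter name="interval">1000</parameter>
        <parameter name="zookeeper.connect">localhost:2181</parameter>
        <parameter name="consumer.type">highlevel</parameter>
        <parameter name="topics">my-topic</parameter>
        <parameter name="group.id">my-consumer-group</parameter>
        <parameter name="content.type">application/json</parameter>
    </parameters>
</inboundEndpoint>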
I resolved the issue with a separate WSO2 consumer Integration project. It was running into issues when the inbound endpoint type was set to Kafka.

Message flow intermittent when using Lenses MQTT Source connector with Confluent Kafka

I am trying to use the Lenses MQTT source connector [https://docs.lenses.io/connectors/source/mqtt.html] with Confluent Kafka v5.4.
Following is my MQTT source connector properties file:
connector.class=com.datamountaineer.streamreactor.connect.mqtt.source.MqttSourceConnector
connect.mqtt.clean=false
key.converter.schemas.enable=false
connect.mqtt.timeout=1000
value.converter.schemas.enable=false
name=kmd-source-4
connect.mqtt.kcql=INSERT INTO kafka-source-topic-2 SELECT * FROM ctt/+/+/location WITHCONVERTER=`com.datamountaineer.streamreactor.connect.converters.source.JsonSimpleConverter` WITHKEY(id)
value.converter=org.apache.kafka.connect.json.JsonConverter
connect.mqtt.service.quality=1
key.converter=org.apache.kafka.connect.json.JsonConverter
connect.mqtt.hosts=tcp://ip:1883
connect.mqtt.converter.throw.on.error=true
connect.mqtt.username=username
connect.mqtt.password=password
errors.log.include.messages=true
errors.log.enable=true
I am publishing messages from the UI-based MQTT client MQTT.fx to the MQTT topic pattern 'ctt/+/+/location' and subscribing to those messages on the Kafka topic 'kafka-source-topic-2'. I am using RabbitMQ as my MQTT broker, and my Confluent platform and RabbitMQ are on different VMs. I do not think using a RabbitMQ broker instead of Mosquitto should be a problem. Whatever and whenever I publish from MQTT.fx, I successfully see the messages in MQTT.fx upon subscription. I had also set up the Confluent MongoDB source connector and it works seamlessly.
But my problem is that messages published on the MQTT topic arrive on the mapped Kafka topic only intermittently. What could be the reason? I do not see any error messages in the Kafka Connect logs. Are there any connection-related properties with respect to the MQTT broker that I need to specify in my MQTT source properties file? Are there any properties that must be included on the RabbitMQ broker side? Has anyone used the Lenses MQTT source and sink connectors and would like to suggest anything about them?
Your connect.mqtt.timeout is only 1 second?!? Intermittent messages suggest to me that your connector is timing out and has to re-establish its connection; while it's busy doing that, MQTT messages are coming in but not making it to the connector, as it is not subscribed to the broker at that instant. Try increasing your timeout to something like 60000 (1 minute) and see what happens. Is there any reason you need it to time out? RabbitMQ can handle connections that stay open for long periods of time with no traffic.
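Concretely, that would be a one-line change in the properties file above (the value is in milliseconds). The keep-alive setting below is an assumption worth checking against the Lenses connector docs rather than a confirmed fix:

connect.mqtt.timeout=60000
# assumed additional knob; verify against the Lenses MQTT connector documentation
connect.mqtt.keep.alive=60000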

Kafka exactly-once with other destinations

I am using Kafka 2, and it looks like exactly-once is possible with:
Kafka Streams
a Kafka read/transform/write transactional producer
Kafka Connect
All of the above work between topics (both source and destination are Kafka topics).
Is it possible to have exactly-once semantics with other destinations?
Sources and destinations (sinks) of Connect are not only topics, but the connector you use determines the delivery semantics; not all are exactly-once.
For example, a JDBC source connector polling a database might miss some records.
Sink connectors coming out of Kafka will send every message from a topic, but it's up to the downstream system to acknowledge that delivery.
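For reference, the second mechanism in the question's list, the transactional read/transform/write loop, looks roughly like this. This is a sketch only, with placeholder topic, group and transactional IDs, using the String-group-id overload of sendOffsetsToTransaction available throughout Kafka 2.x:

import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.*;

public class ExactlyOnceLoop {
    public static void main(String[] args) {
        Properties cp = new Properties();
        cp.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cp.put(ConsumerConfig.GROUP_ID_CONFIG, "eos-demo");              // placeholder group id
        cp.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");       // offsets go in the transaction
        cp.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed"); // skip aborted transactions
        cp.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringDeserializer");
        cp.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringDeserializer");

        Properties pp = new Properties();
        pp.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        pp.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "eos-demo-tx");   // enables idempotence + transactions
        pp.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringSerializer");
        pp.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {
            consumer.subscribe(Collections.singletonList("input-topic"));
            producer.initTransactions();
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                for (ConsumerRecord<String, String> r : records) {
                    producer.send(new ProducerRecord<>("output-topic", r.key(), r.value().toUpperCase()));
                    offsets.put(new TopicPartition(r.topic(), r.partition()),
                                new OffsetAndMetadata(r.offset() + 1));
                }
                // Committing the consumer offsets inside the same transaction as the
                // writes is what makes the loop exactly-once within Kafka.
                producer.sendOffsetsToTransaction(offsets, "eos-demo");
                producer.commitTransaction();
            }
        }
    }
}

As the answer notes, this guarantee holds only while both ends are Kafka topics; once the destination is an external system, the sink's own idempotence and acknowledgement behavior determines the semantics.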