I am writing an HTTP sink connector for Kafka Connect. Its purpose is to send an HTTP request for each message in the configured Kafka topic. I want to send the message to a dead letter queue if the HTTP request fails. Can I make use of the dead letter queue configuration provided for sink connectors?
The reason for this question is that the Kafka Connect documentation and several blogs mention that only errors in transforms and converters are sent to the dead letter queue, not errors that occur during put(). Since the task of sending the HTTP request is done in put(), I am wondering: is there a way to send failed HTTP messages to the DLQ? If not, is it possible to send the message to some other Kafka topic for further processing?
As #Neil pointed out, this might be informative:
KIP-610 (implemented in Kafka 2.6) added DLQ support for errors that occur while interacting with the end system. KIP-298 added a DLQ, but only for errors that occur before the sink-system interaction.
Check the versions of your Connect cluster and sink connector to see if they support it.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors
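If your Connect cluster does support it, the DLQ routing is enabled through the connector's error-handling properties. A minimal sketch of a sink connector configuration (the connector class, topic names, and replication factor here are placeholders for illustration):

```json
{
  "name": "http-sink",
  "config": {
    "connector.class": "com.example.HttpSinkConnector",
    "topics": "http-requests",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "http-sink-dlq",
    "errors.deadletterqueue.topic.replication.factor": "1",
    "errors.deadletterqueue.context.headers.enable": "true"
  }
}
```

Note that for put()-time failures the connector code itself has to hand the failed record back to the framework, via the ErrantRecordReporter obtained from SinkTaskContext.errantRecordReporter() (available since Kafka 2.6, per KIP-610); the runtime then writes it to the configured DLQ topic.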
Related
I am creating two WSO2 ESB integration projects: one for a Kafka message producer and one for a Kafka message consumer. I created the Kafka message producer successfully; on invocation of a REST API, the message is dropped on the topic. But when creating the Kafka message consumer, there is no Kafka transport for a proxy to listen on, and as per the documentation we need to use an inbound endpoint for this. My requirement is that the consumer should automatically receive messages from a topic (similar to a JMS consumer) when a message is available on the topic. So how can I achieve that using an inbound endpoint with the Kafka details?
Any inputs?
If you want to consume messages in a Kafka topic using an EI server, you need to use the Kafka inbound endpoint. This inbound endpoint periodically polls messages from the topic, and the polling interval is configurable. Refer to the documentation [1] for more information.
[1] https://docs.wso2.com/display/EI660/Kafka+Inbound+Protocol
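A rough sketch of what such an inbound endpoint definition looks like; the names, hosts, topic, and sequence references are placeholders, and the exact parameter set depends on your EI version (check the documentation above for yours):

```xml
<inboundEndpoint name="KafkaConsumerEP"
                 protocol="kafka"
                 sequence="kafkaProcessSeq"
                 onError="kafkaErrorSeq"
                 suspend="false">
    <parameters>
        <!-- how often (ms) the endpoint polls the topic -->
        <parameter name="interval">1000</parameter>
        <parameter name="zookeeper.connect">localhost:2181</parameter>
        <parameter name="consumer.type">highlevel</parameter>
        <parameter name="topics">my-topic</parameter>
        <parameter name="group.id">my-group</parameter>
        <parameter name="content.type">application/json</parameter>
    </parameters>
</inboundEndpoint>
```

Each polled message is injected into the sequence named by the `sequence` attribute, which is where your mediation logic goes.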
I resolved the issue with a separate WSO2 consumer integration project. It was running into issues while creating the inbound endpoint with type Kafka.
I have a web server which receives some events as POST requests, and I want to process them using Kafka Streams. Which source connector can I use to achieve this?
A Source Connector reads data from a source (your server, in this case), which would mean issuing GET requests...
If you want Connect to make POST requests (consuming from Kafka, then sending requests), that would be a Sink Connector, and it has nothing to do with Kafka Streams.
Relevant: https://github.com/llofberg/kafka-connect-rest
If that doesn't meet your needs, you can write your own Connector
I'm developing a Kafka sink connector on my own. My converter is JsonConverter. However, when someone sends malformed JSON data to my connector's topic, I want to skip that record and send it to a specific topic of my company.
My confusion is: I can't find any API to get my Connect worker's bootstrap.servers. (I know it's in the Confluent etc directory, but hard-coding the path to connect-distributed.properties just to read bootstrap.servers is not a good idea.)
So the question is: is there another way to get the value of bootstrap.servers conveniently in my connector program?
Instead of trying to send the "bad" records from a SinkTask to Kafka yourself, you should use the dead letter queue feature that was added in Kafka Connect 2.0.
You can configure the Connect runtime to automatically dump records that failed to be processed to a configured topic acting as a DLQ.
For more details, see the KIP that added this feature.
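As a hedged sketch, the error-handling properties involved look roughly like the following (the DLQ topic name is a placeholder). With these set on the connector, records that fail in the converter, such as malformed JSON hitting JsonConverter, are routed to the configured topic instead of killing the task:

```properties
errors.tolerance = all
errors.deadletterqueue.topic.name = my-connector-dlq
errors.deadletterqueue.topic.replication.factor = 1
# optionally also log failed records for debugging
errors.log.enable = true
errors.log.include.messages = true
```

This also sidesteps the bootstrap.servers problem entirely: the runtime writes to the DLQ topic on your behalf, so your connector never needs to construct its own producer.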
I have a REST-based application deployed in a server (Tomcat).
Every request that comes to the server takes 1 second to serve. Now I have an issue: sometimes the server receives more requests than it is capable of serving, which makes it unresponsive. I was thinking that if I could store the requests in a queue, the server could pull requests and serve them at its own pace, handling the peak-time issue.
Now I was wondering whether Kafka could be helpful for this; if yes, any pointers on where to start?
You can use Kafka (or any other messaging system, e.g. ActiveMQ, RabbitMQ, etc.):
When the web service receives a request, add the request (with all the details required to process it) to a Kafka topic, using a Kafka producer.
A separate service (with a Kafka consumer) will read from the topic (queue) and process it.
If you need to notify the client when the request is processed, the server can push the information to the client using a WebSocket (or the client can poll for the request status, but this needs a request-status endpoint and puts load on it).
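The pattern above can be sketched without any Kafka dependency; here the standard-library queue.Queue stands in for the Kafka topic, and the worker thread stands in for the separate consumer service (in production these would be Kafka producer/consumer clients instead):

```python
import queue
import threading

# In-process stand-in for the Kafka topic: the web layer enqueues
# request payloads instead of processing them inline.
requests = queue.Queue()
results = {}

def handle_request(req_id, payload):
    # Web endpoint: accept immediately, defer the slow work.
    requests.put((req_id, payload))
    return {"status": "accepted", "id": req_id}

def worker():
    # Consumer service: drains the queue at its own speed.
    while True:
        item = requests.get()
        if item is None:  # shutdown sentinel
            break
        req_id, payload = item
        results[req_id] = payload.upper()  # the 1-second job, abbreviated
        requests.task_done()

t = threading.Thread(target=worker)
t.start()

# Burst of requests: all accepted instantly, processed at the worker's pace.
for i in range(5):
    handle_request(i, f"job-{i}")

requests.join()      # wait until every enqueued request is processed
requests.put(None)   # tell the worker to stop
t.join()
print(results)
```

The key property is the same one Kafka gives you: the accepting side never blocks on the slow processing, so a burst of requests fills the queue instead of making the server unresponsive.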
Apache Kafka would be helpful in your case. If you use a Kafka broker, it will allow you to cope with a peak of requests. The requests will be stored in a queue, as you mentioned, and treated by your server at its own speed.
As you are using Tomcat, I guess you developed your server in Java. Apache Kafka provides a Java API which is quite easy to use.
I am using an Eclipse Paho client to send MQTT messages to a Mosquitto broker. The payload is in JSON format. The broker parses the payload, updates it with some more information, and publishes it to a subscriber. The subscriber in my case is a BDAS/Spark instance.
The client, broker, and Spark instance are running on different boxes.
In this setup I want to integrate my Mosquitto broker with MongoDB. I tried to do it with Node-RED but was not successful.
Could you point me to some suggestions on this?
If Mosquitto is not a hard requirement, you could also use an MQTT broker with a plugin system (like HiveMQ) to do this. You can see an example architecture in this blog post.
It should be pretty trivial to write such a plugin for HiveMQ; you only need to implement the OnPublishCallback (see the documentation).
A good place to start is this GitHub repository.