I have an Event class.
class Event {
String destination;
}
The destination property is the name of a Kafka topic.
How can I publish a message to that destination?
I can't put the destination in application.properties, because it is set programmatically at runtime.
Since you determine the destination dynamically, you need a BinderAwareChannelResolver in your application.
This is similar to what the router sink application does, so you can check that for reference.
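A minimal sketch of how that could look, assuming a Spring Cloud Stream application with the Kafka binder on the classpath (the EventPublisher class and the plain String payload are only placeholders):

import org.springframework.cloud.stream.binding.BinderAwareChannelResolver;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Service;

@Service
public class EventPublisher {

    private final BinderAwareChannelResolver resolver;

    public EventPublisher(BinderAwareChannelResolver resolver) {
        this.resolver = resolver;
    }

    public void publish(Event event, String payload) {
        // Resolve (and, if necessary, create) an output binding for the topic
        // named in the event, then send the message to it.
        resolver.resolveDestination(event.destination)
                .send(MessageBuilder.withPayload(payload).build());
    }
}

Note that in recent Spring Cloud Stream versions BinderAwareChannelResolver is deprecated in favour of StreamBridge, whose send(destination, payload) method serves the same purpose.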
In my application, Kafka topics are dedicated to a domain (I can't change that), and multiple different types of events (1 event = 1 Avro schema message) related to that domain are produced into that one topic by different micro-services.
Now I have only one consumer app, in which I should be able to apply a different schema dynamically (by inspecting the event name in the message) and transform the message into the appropriate POJO (generated from the specific Avro schema) for further event-specific actions.
Every example I find on the net is about a single-schema message consumer, so I need some help.
Related blog post: https://www.confluent.io/blog/multiple-event-types-in-the-same-kafka-topic/
How to configure the consumer:
https://docs.confluent.io/platform/current/schema-registry/serdes-develop/serdes-avro.html#avro-deserializer
https://github.com/openweb-nl/kafka-graphql-examples/blob/307bbad6f10e4aaa6b797a3bbe3b6620d3635263/graphql-endpoint/src/main/java/nl/openweb/graphql_endpoint/service/AccountCreationService.java#L47
https://github.com/openweb-nl/kafka-graphql-examples/blob/307bbad6f10e4aaa6b797a3bbe3b6620d3635263/graphql-endpoint/src/main/resources/application.yml#L20
You need the generated Avro classes on the classpath, most likely by adding a dependency.
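For illustration, a minimal sketch of such a consumer, assuming the Confluent Avro deserializer and the generated Avro classes are on the classpath (the topic name, group id, schema registry URL and the event names in the switch are placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.avro.specific.SpecificRecord;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MultiSchemaConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "domain-consumer");
        props.put("key.deserializer", StringDeserializer.class.getName());
        // Confluent deserializer: fetches the writer schema from the Schema Registry per message
        props.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("schema.registry.url", "http://localhost:8081");
        // Return instances of the generated SpecificRecord classes instead of GenericRecord
        props.put("specific.avro.reader", "true");

        try (KafkaConsumer<String, SpecificRecord> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("domain-topic"));
            while (true) {
                for (ConsumerRecord<String, SpecificRecord> record : consumer.poll(Duration.ofMillis(500))) {
                    SpecificRecord event = record.value();
                    // Dispatch on the event name taken from the Avro schema of the message
                    switch (event.getSchema().getName()) {
                        case "OrderCreated":
                            // cast to the generated OrderCreated class and handle it
                            break;
                        case "OrderCancelled":
                            // cast to the generated OrderCancelled class and handle it
                            break;
                        default:
                            // unknown event type
                            break;
                    }
                }
            }
        }
    }
}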
I want to use Kafka Connect in order to read events from a Kafka topic and write them into RabbitMQ.
In order to do so, I need to use the RabbitMQ sink.
Each of the events coming from Kafka should be sent to a different queue (based on some field in the event structure), which means a different routing key should be used. As far as I know, there is an option to configure a static routing key in the sink configuration. Is there any option to configure it dynamically, based on the events, to achieve the required behavior?
Is there any way to make my Kafka Streams application automatically read from a newly created topic?
Even if the topic is created while the stream application is already running?
Something like having a wildcard in the topic name, like this:
KStream<String, String> rawText = builder.stream("topic-input-*");
Why do I need this?
Right now, I have multiple clients sending data (all with the same schema) to their own topics, and my stream application reads from those topics. Then my application does some transformation and writes the result to a single topic.
Although all of the clients could write to the same topic, a misbehaving client could also write on behalf of someone else, so I've created individual topics for each client. The problem is, whenever a new client comes, I create the new topic and set the ACLs for them with a script, but that is not enough. I also have to stop my streaming application, edit the code, add the new topic, compile it, package it, put it on the server, and run it again!
Kafka Streams supports pattern subscription:
builder.stream(Pattern.compile("topic-input-.*"));
(I hope the syntax is right; I'm not sure off the top of my head... But the point is, instead of passing in a String, you can use an overload of the stream() method that takes a Pattern.)
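For completeness, a minimal sketch of a Streams application using pattern subscription (the application id, bootstrap servers, output topic and the mapValues step are placeholders for your own setup). Note that the regex wildcard is ".*", not "*":

import java.util.Properties;
import java.util.regex.Pattern;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class WildcardStreamApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wildcard-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Subscribe to every topic matching the pattern, e.g. topic-input-clientA, topic-input-clientB, ...
        KStream<String, String> rawText = builder.stream(Pattern.compile("topic-input-.*"));
        rawText.mapValues(v -> v.toUpperCase())   // stand-in for the real transformation
               .to("topic-output");

        new KafkaStreams(builder.build(), props).start();
    }
}

With pattern subscription the underlying consumer periodically re-evaluates the pattern against the existing topics, so topics created while the application is already running are picked up as well.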
Currently I have a sink connector which gets data from topic A and sends it to an external service.
Now I have a use case where, based on some logic, I should send the data to topic B instead of the service.
And this logic depends on the response of the target service, which returns a response based on the data.
So, because the data has to be sent to the target system every time, I couldn't use the Streams API.
Is that feasible somehow?
Or should I add a Kafka producer manually to my sink? If so, is there any drawback?
The first option is to create a custom Kafka Connect Single Message Transform (SMT) that implements the desired logic, possibly combined with ExtractTopic (depending on what your custom SMT looks like).
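A minimal sketch of what such a custom SMT could look like, assuming the record value is a Struct and routing on a hypothetical "action" field (the class, field and topic names are placeholders):

import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.transforms.Transformation;

public class RouteByField<R extends ConnectRecord<R>> implements Transformation<R> {

    @Override
    public R apply(R record) {
        Struct value = (Struct) record.value();
        // Hypothetical field "action": records flagged for topic B get their topic rewritten
        String topic = "send_to_b".equals(value.getString("action")) ? "topic_B" : record.topic();
        return record.newRecord(topic, record.kafkaPartition(),
                record.keySchema(), record.key(),
                record.valueSchema(), record.value(), record.timestamp());
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef();
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}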
The second option is to build your own consumer. For example:
Step 1: Create one more topic on top of topic A
Create one more topic, say topic_a_to_target_system
Step 2: Implement your custom consumer
Implement a Kafka consumer that consumes all the messages from topic_a.
At this point, you also need to instantiate a Kafka producer and, based on the logic, decide whether each message needs to be forwarded to topic_B or to the target system (i.e. produced to topic_a_to_target_system); see the sketch after Step 3.
Step 3: Start Sink connector on topic_a_to_target_system
Finally, start your sink connector so that it sinks the data from topic_a_to_target_system to your target system.
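A minimal sketch of Step 2, using plain String serializers (the bootstrap servers, group id and the shouldGoToB(...) check are placeholders for your own setup; the topic names come from the steps above):

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class RoutingConsumer {

    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "topic-a-router");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of("topic_a"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // Forward each record either to topic_B or to the topic the sink connector reads from
                    String target = shouldGoToB(record.value()) ? "topic_B" : "topic_a_to_target_system";
                    producer.send(new ProducerRecord<>(target, record.key(), record.value()));
                }
            }
        }
    }

    // Placeholder for the decision logic described above
    private static boolean shouldGoToB(String value) {
        return value.contains("send-to-b");
    }
}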
I'm using spooldir as a Flume source with a Kafka sink; is there any way I can transfer both the content and the filename to Kafka?
For example, if the filename is test.txt and the content is hello world, I need to display:
hello world
test.txt
Some sources allow adding the name of the file as a header of the Flume event created from the input data; that's the case for the spooldir source.
And some sinks allow configuring the serializer to be used for writing the data, such as the HDFS one; in that case, I've read there exists a header_and_text serializer (never tested it). Nevertheless, the Kafka sink does not expose parameters for doing that.
So, IMHO your options are two:
Configure the spooldir source to add the above-mentioned header with the file name, and develop a custom interceptor in charge of adding that header value to the data (a sketch is shown after the second option). Interceptors are pieces of code running at the output of the sources that "intercept" the events and modify them before they are effectively put into the Flume channel.
Modify the data you send to the spooldir source by adding a first line containing the file name.
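For the first option, a minimal sketch of such an interceptor, assuming the spooldir source is configured with fileHeader = true so that every event carries the path of the input file in a "file" header (the spooldir default header key):

import java.nio.charset.StandardCharsets;
import java.util.List;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

public class FileNameInterceptor implements Interceptor {

    @Override
    public void initialize() { }

    @Override
    public Event intercept(Event event) {
        // Header added by the spooldir source when fileHeader = true
        String fileName = event.getHeaders().get("file");
        if (fileName != null) {
            String body = new String(event.getBody(), StandardCharsets.UTF_8);
            // Append the file name as an extra line after the original content
            event.setBody((body + "\n" + fileName).getBytes(StandardCharsets.UTF_8));
        }
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        for (Event e : events) {
            intercept(e);
        }
        return events;
    }

    @Override
    public void close() { }

    public static class Builder implements Interceptor.Builder {

        @Override
        public Interceptor build() {
            return new FileNameInterceptor();
        }

        @Override
        public void configure(Context context) { }
    }
}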