How do I archive Kafka messages? - apache-kafka

How can we archive Kafka messages? For example, if we want to send a particular message to some topic, can we archive that message and send it to that topic or some other topic?
Can we replay that message to the topic?
Can we replay based on particular offset?

send a particular message to some topic
That is a regular producer send request
we archive that message
Kafka persists data on its own for a configurable retention period; the default is one week.
send to that topic or some other topic
Again, a producer request can send to a specific topic. Kafka Streams or MirrorMaker can send to other topics, if needed.
replay that message to the topic
Not clear... replay from where? Generally, you read the message and produce it to a topic.
replay based on particular offset
Yes, you can consume from a given TopicPartition + offset coordinate within Kafka, as shown in the sketch below.
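For that last point, a minimal sketch with the plain Java consumer; the broker address, topic name, partition, and offset are all assumptions for illustration:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReplayFromOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "replay-example");          // hypothetical group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // assign() instead of subscribe(), so we control the read position ourselves
            TopicPartition tp = new TopicPartition("my-topic", 0); // hypothetical topic/partition
            consumer.assign(Collections.singleton(tp));
            consumer.seek(tp, 42L); // replay starts at offset 42

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```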

Related

Does Kafka send a notification to all consumers that a new message has arrived?

Imagine there is 1 producer and 1000 consumers with the same group id (the producer's id and the consumer group id are not the same).
When a message arrives and Kafka places it in the queue, does Kafka send a notification to the 1000 consumers that a new message has arrived (and after that, only one consumer takes the message)?
If not, how does a consumer know that a new message has arrived?
Does Kafka send a notification to all consumers that a new message has arrived?
Kafka works differently.
In the case you describe, all consumers would regularly try to fetch messages from the brokers. Thus, it's not necessary for the broker to send a notification, because the consumers proactively poll for new messages anyway.
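A minimal sketch of that poll loop with the Java consumer; broker address, group id, and topic name are assumptions:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "my-group");                // shared by all 1000 consumers
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("my-topic")); // hypothetical topic
            while (true) {
                // poll() is the pull: the broker never pushes a notification
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // within one group, each partition is owned by exactly one consumer,
                    // so each record is handled by a single group member
                    System.out.println(record.value());
                }
            }
        }
    }
}
```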

Removing one message from a topic in Kafka

I'm new to Kafka and I have one question. Can I delete only ONE message from a topic if I know the topic, the offset, and the partition? And if not, is there any alternative?
It is not possible to remove a single message from a Kafka topic, even if you know its partition and offset.
Keep in mind that Kafka is not a key/value store; a topic is rather an append-only(!) log that represents a stream of data.
If you are looking for alternatives to remove a single message, you may:
Have your consumer clients ignore that message
Enable log compaction and send a tombstone message (see the sketch after this list)
Write a simple job (Kafka Streams) to consume the data, filter out that one message, and produce all messages to a new topic.
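For the tombstone option, a minimal sketch, assuming a topic named compacted-topic that has cleanup.policy=compact and that the unwanted message has the key key-to-delete; note this removes all records with that key once compaction runs, not a single offset:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SendTombstone {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // a record with a null value is a tombstone: on a compacted topic,
            // log compaction eventually drops all earlier records with this key
            producer.send(new ProducerRecord<>("compacted-topic", "key-to-delete", null)).get();
        }
    }
}
```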

How to make fanout in Apache Kafka?

I need to send a message to all consumers, but before that I need to detect who should get the message. How can I do that using Kafka?
Should I use Kafka Streams to filter the data and then send it to consumers?
As far as I know, each consumer should be added to a unique consumer group, but how do I detect in real time who must receive the message?
Kafka decouples consumers and producers: when you write into a topic, you don't know which consumers might read the data.
Thus, in Kafka you never "send a message to a consumer", you just write the message into a topic and that's it.
Consumers just read from topics.
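To illustrate the fanout that falls out of this model: if every subscriber uses its own group.id, every group receives every record. A sketch, with broker address, group id, and topic name as assumptions:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FanoutSubscriber {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        // each subscriber gets its own group id; run a second copy with
        // "subscriber-b" and both will see every record on the topic
        props.put("group.id", "subscriber-a");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("broadcast-topic")); // hypothetical topic
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.println("subscriber-a got: " + record.value());
                }
            }
        }
    }
}
```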

Offset and Partition - Kafka Sink Processor

We are using a Kafka Streams Topology with forward() to send a record to a Kafka topic.
We were using a separate producer to publish the message earlier, and we were able to grab the offset and partition of the message. Now we want to replace it with context.forward().
How can we get the offset and partition of the record sent by the Kafka sink processor using context.forward()?
Publish the message to the topic in producer.type=sync mode. When you call the send() method, it will return all the details you are looking for.
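producer.type=sync is a setting of the old Scala producer; with the current Java producer, the equivalent is to block on the Future returned by send(), which hands back a RecordMetadata carrying the partition and offset. A sketch, with broker address and topic name as assumptions:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class SyncSend {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // blocking on the Future makes the send synchronous and returns
            // the metadata of the written record
            RecordMetadata meta = producer
                    .send(new ProducerRecord<>("my-topic", "key", "value")) // hypothetical topic
                    .get();
            System.out.printf("partition=%d offset=%d%n", meta.partition(), meta.offset());
        }
    }
}
```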

Can I consume based on a specific condition in Kafka?

I'm writing a message into Kafka and consuming it at the other end.
I do some processing on it and write it back to another Kafka topic.
I want to know which response message is for which request.
Currently I've decided to capture the offset on the consumer side, write it into the response, and then read the response payload to match them up.
For this approach we need to read every message; is there any other way we can consume based on a condition in the consumer config?
Consumers can only read the whole topic. You can skip messages via seek(), but there are no conditions that you can evaluate on the broker to filter messages.
You will need to consume the whole topic and process/filter in the client, as sketched below.
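A minimal sketch of that client-side filtering; the broker address, topic name, and the key being matched are assumptions for illustration:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FilteringConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "response-reader");         // hypothetical group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("responses")); // hypothetical topic
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // the broker hands over everything; the condition is evaluated here
                    if (!"request-42".equals(record.key())) {
                        continue; // not the response we are waiting for
                    }
                    System.out.println("matched response: " + record.value());
                }
            }
        }
    }
}
```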