Offset and Partition - Kafka Sink Processor

We are using Kafka Streams' topology forwarding to send a record to a Kafka topic.
Earlier we used a separate producer to publish the message, and we were able to grab the offset and partition of the message. Now we want to replace that with context.forward().
How can we get the offset and partition of a record sent by the Kafka sink processor when using context.forward()?

Publish the message to the topic synchronously. The old producer's producer.type=sync setting did exactly this; with the current producer API you get the same behaviour by blocking on the Future that send() returns, and the resulting RecordMetadata carries the partition and offset you are looking for.
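A minimal sketch of such a synchronous send (the broker address, topic, key, and value are placeholders, not from the question):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class SyncSend {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Blocking on the Future makes the send effectively synchronous.
                RecordMetadata meta = producer.send(
                        new ProducerRecord<>("my-topic", "key", "value")).get();
                System.out.printf("partition=%d offset=%d%n", meta.partition(), meta.offset());
            }
        }
    }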

Related

Consume Kafka Message using poll mode

I am new to Kafka, and I am using Kafka 1.0.
I read Kafka messages in pull mode, that is, I periodically poll() the Kafka topic for new messages, but I never write the offset back to Kafka.
How does Kafka know which offsets I have consumed, i.e., what is the mechanism by which Kafka remembers a consumer's progress (its offset)?
Every consumer group maintains its offset per topic partition. Since v0.9 the committed offsets for every consumer group are stored in an internal topic called (by default) __consumer_offsets (prior to v0.9 this information was stored in ZooKeeper). When the offset manager receives an OffsetCommitRequest, it appends the request to this special compacted topic. The offset manager sends a successful offset-commit response to the consumer only once all replicas of the offsets topic have received the offsets.
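Note that with the default enable.auto.commit=true the client commits these offsets for you in the background, which may be why progress is remembered even though you never commit explicitly. A minimal sketch of committing explicitly instead (broker, topic, and group names are placeholders; poll(Duration) is the clients 2.0+ overload, on 1.0 use poll(long)):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class CommitExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "my-group");
            props.put("enable.auto.commit", "false"); // commit explicitly below
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> r : records) {
                        System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
                    }
                    // Sends an OffsetCommitRequest; the group's position is stored
                    // in the __consumer_offsets topic described above.
                    consumer.commitSync();
                }
            }
        }
    }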

Kafka to Kafka -> reading source kafka topic multiple times

I am new to Kafka. I have a source Kafka topic whose messages have the default retention of 7 days, on a cluster with 3 brokers, 1 partition, and a replication factor of 1.
When I consume messages from the source topic and produce them to my target topic, they arrive in the same order. But when I try to reprocess all the messages from my source topic into my target topic, the target receives nothing. I know duplication should normally be avoided, but suppose I have 100 messages in my source topic and expect 200 in my target after running the job twice: instead I get 100 messages on the first run and nothing on the second.
Can someone please explain why this happens and what mechanism is behind it?
A Kafka consumer reads data from the partitions of a topic; within a consumer group, each partition is read by only one consumer at a time.
Once a message has been read and its offset committed, the same consumer group will not receive it again by default. Consider the current offset first. When you call poll(), Kafka returns a batch of messages. Assume the partition holds 100 records and the current offset starts at 0: the first call returns the 100 messages, and Kafka advances the current offset to 100.
The current offset is a pointer to the position just past the last record Kafka delivered to the consumer in the most recent poll. Once that position is committed, the group resumes from it on the next run, which is why your second run sees nothing: to reprocess, consume with a new group.id or reset the group's offsets, as in the sketch after the link. Please go through the following diagram and URL for a complete picture.
https://www.learningjournal.guru/courses/kafka/kafka-foundation-training/offset-management/
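To make the second run see all 100 messages again, either consume with a fresh group.id (a new group has no committed offsets, so auto.offset.reset=earliest starts it from the beginning) or rewind the existing group explicitly. A minimal sketch showing both knobs (broker, topic, and group names are placeholders):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ReprocessAll {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "copy-job-run2");     // fresh group id => no committed offsets yet
            props.put("auto.offset.reset", "earliest"); // new groups start at the beginning
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("source-topic"));
                // Or keep the old group.id and rewind it instead:
                consumer.poll(Duration.ZERO);                    // join the group, receive assignment
                consumer.seekToBeginning(consumer.assignment()); // reset position to the first offset
                // Subsequent poll() calls now deliver all 100 source records again.
            }
        }
    }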

Fetch message with specified key using Kafka Listener vs Kafka Consumer

I use a Spring Boot app to produce and consume/listen to Kafka messages.
I produce a message to the topic, then consume/listen for that specific message by comparing the message key, and send the consumed message on for further processing.
I am stuck on which approach is better suited to my requirement of fetching a specific message: a Kafka listener or a plain Kafka consumer?
KafkaListener is a Spring-specific concept that wraps the Kafka Consumer API; it does not change what lookups are possible.
There is no way to fetch a message by key in Kafka. At best you can compute which partition the default partitioner would have placed the key in, and then scan that partition, filtering records by key.
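A sketch of that scan with the plain consumer (topic, key, and group names are placeholders; it assumes the producer used the default partitioner with a non-null key and no explicit partition, so Utils.murmur2/toPositive reproduce its partition choice):

    import java.nio.charset.StandardCharsets;
    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.utils.Utils;

    public class FindByKey {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "key-scan");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            String topic = "my-topic", wantedKey = "order-42";
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                int numPartitions = consumer.partitionsFor(topic).size();
                // Same hash the default partitioner applies to a non-null key.
                byte[] keyBytes = wantedKey.getBytes(StandardCharsets.UTF_8);
                int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;

                TopicPartition tp = new TopicPartition(topic, partition);
                consumer.assign(Collections.singletonList(tp));
                consumer.seekToBeginning(Collections.singletonList(tp));
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
                    if (wantedKey.equals(r.key())) {
                        System.out.printf("found at offset %d: %s%n", r.offset(), r.value());
                    }
                }
            }
        }
    }

The same filter-by-key logic works inside a Spring @KafkaListener method; the listener simply saves you the poll loop, it cannot skip the scan.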

Sometimes unable to update the kafka offset from producer and consumer

We are able to insert data into Kafka using the producer API, and the offset increments; likewise we can consume data using the consumer API, and the consumer offset increments.
But sometimes the offset does not advance correctly while pushing data to and consuming data from Kafka. Please help me out.

How do I archive Kafka message?

How can we archive Kafka messages? For example, if we want to send a particular message to some topic, can we archive that message and send it to that topic or some other topic?
Can we replay that message to the topic?
Can we replay based on a particular offset?
"send a particular message to some topic"
That is a regular producer send request.

"we archive that message"
Kafka already persists data on its own for a configurable amount of time; the default retention is a week.

"send to that topic or some other topic"
Again, a producer request can send to a specific topic. Kafka Streams or MirrorMaker can forward to other topics, if needed.

"replay that message to the topic"
Not clear... replay from where? Generally, you read the message and produce it to a topic.

"replay based on particular offset"
Yes, you can consume from a given TopicPartition + offset coordinate within Kafka.
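A minimal sketch of such a replay (broker address, topic, partition number, and starting offset are placeholders):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class ReplayFromOffset {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "replay");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("my-topic", 0);
                consumer.assign(Collections.singletonList(tp)); // manual assignment, no rebalance
                consumer.seek(tp, 42L);                         // start reading at offset 42
                consumer.poll(Duration.ofSeconds(1)).forEach(r ->
                        System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
            }
        }
    }

Using assign() rather than subscribe() bypasses the consumer group's committed offsets, so the seek position is honored exactly.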