Kafka replication of consumer group offsets - apache-kafka

I have a project to migrate from an old (and unmonitored, etc.) Kafka cluster to multiple fresh new clusters.
For that, the consumer groups need to be synced at some point (otherwise every consumer will restart according to its own reset strategy).
MirrorMaker 2 seems to work well for replicating topics, but for consumer groups it's very erratic, to say the least. My temporary conclusion is that MM2 only "replicates" empty consumer groups (which can make sense?), but inspecting the offsets shows me that they are not in sync.
I tried to dig into the config/logs and I never found something that works reliably. (Btw, the documentation of MirrorMaker 2 is just horrible; we don't know which config stanza is supposed to apply where, and reading the logs is kind of messy.)
I also tried Confluent Replicator, which also replicates topics well, but not consumer groups.
So, first question:
has anyone already succeeded in making MM2 work with consumer group translation/replication?
Plan B:
Assuming I can shut down my consumers for some time, and assuming also that I have the exact same topic/partition topology, so I don't need translation.
My idea is to find a way to manually take a snapshot of each consumer group involved, and recreate them on the destination cluster.
It should be possible (after all, MM2 is supposed to do it); in the end it's just a matter of manually committing an offset per partition. kafka-consumer-groups offers an option to reset offsets to earliest or to a timestamp (maybe that's sufficient, we will see), but I would have preferred a tool to create the exact same consumer group on the destination cluster.
Has anyone already done that, and do you know a tool to do it?
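For illustration, the snapshot-and-recommit idea I have in mind would look roughly like this with the Java AdminClient (just a sketch, assuming identical topic/partition layout on both clusters; the group name and bootstrap servers are placeholders):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

Properties src = new Properties();
src.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "old-cluster:9092");   // placeholder
Properties dst = new Properties();
dst.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "new-cluster:9092");   // placeholder
String group = "my-consumer-group";                                        // placeholder

try (Admin srcAdmin = Admin.create(src); Admin dstAdmin = Admin.create(dst)) {
    // Snapshot the committed offsets of the group on the source cluster
    Map<TopicPartition, OffsetAndMetadata> offsets =
        srcAdmin.listConsumerGroupOffsets(group).partitionsToOffsetAndMetadata().get();
    // Re-commit the same offsets on the destination cluster
    // (needs Kafka 2.4+ clients/brokers, and the group must have no active members there)
    dstAdmin.alterConsumerGroupOffsets(group, offsets).all().get();
}

This only makes sense if an offset means the same thing on both clusters, which is exactly the assumption above.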

Related

Apache Kafka messages got archived - is it possible to retrieve the messages

We are using Apache Kafka and we process more than 30 million messages per day. We have a retention policy of "30" days. However, our messages got archived before 30 days.
Is there a way we could retrieve the deleted messages?
Is it possible to reset the "start index" to an older index to retrieve the data through a query?
What other options do we have?
If we have a "disk backup", could we use that for retrieving the data?
Thank You
I'm assuming your messages got deleted by the Kafka cluster here.
In general, no - if the records got deleted due to duration / size related policies, then they have been removed.
Theoretically, if you have access to backups, you might move the Kafka log files back into the broker's data directory, but the behaviour is undefined. Trying that on a fresh cluster with infinite size/time policies (so nothing gets purged immediately) might work and let you consume the data again.
In my experience, until the general availability of Tiered Storage, there is no free/easy way to recover data (via the Kafka Consumer protocol).
For example, you could use a Kafka Connect sink connector to write the data to some external, more persistent storage. But then, would you want to write a job that scrapes that data? Sure, you could have a SQL table of STRING topic, INT timestamp, BLOB key, BLOB value, and maybe track "consumer offsets" separately from that, but with that design Kafka doesn't really seem useful anymore, as you'd be reimplementing various parts of it when you could have just added more storage to the Kafka cluster.
Is it possible to reset the "start index" to older index to retrieve the data through query?
That is what auto.offset.reset=earliest will do, or kafka-consumer-groups --reset-offsets --to-earliest
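For illustration, in a Java consumer that is just one config entry (a sketch; it only takes effect when the group has no valid committed offsets, e.g. a brand-new group.id):

Properties props = new Properties();
// ... usual consumer settings (bootstrap.servers, group.id, deserializers, ...)
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // start from the log start offset instead of the latest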
have "disk backup", could we use that
With caution, maybe. For example, you can copy old broker log segments back onto a broker, but then there aren't any tools I know of that will retroactively discover the new "low watermark" of each topic (maybe the broker finds this upon restart; I haven't tested). You'd need to copy this data to each broker manually, I believe, since the replicas wouldn't know about the old segments (again, they might after a full cluster restart).
Plus, the consumers' committed offsets would already be way past that data, unless you stop all consumers and reset them.
I'm also not sure what happens if you have gaps in the segment files. E.g. your current oldest segment is N and you copy back N-2, but not N-1... You might then run into an error, or the consumer will simply apply the auto.offset.reset policy and seek to the next available offset or to the very end of the topic.

Manually setting Kafka consumer offset

In our project, there are Active Kafka servers (PR) and Passive Kafka servers (DR); both Kafka clusters are configured with the same group names, topic names and partitions. When switching from PR to DR, the __consumer_offsets topic is manually set on DR.
My question here is, would the Kafka consumer be able to seamlessly consume the messages from where it was last read?
When replicating messages across 2 clusters, it's not possible to ensure offsets stay in sync.
For example, if a topic exists for a little while on the Active cluster the log start offset for some partitions may not be 0 (some records have been deleted by the retention policies). Hence when replicating this topic, offsets between both clusters will not be the same. This can also happen when messages are lost or duplicated as you can't have exactly once semantics when replicating between 2 clusters.
So you can't just replicate the __consumer_offsets topic, this will not work. Consumer group positions have to be explicitly "translated" between both clusters. While it's possible to reset them "manually" by directly committing, it's not recommended as finding the new positions is not obvious.
Instead, you should use a replication tool that supports "offset translation" to ensure consumers can seamlessly switch from 1 cluster to the other.
For example, Mirror Maker 2, the official Kafka tool for mirroring clusters, supports offset translation via RemoteClusterUtils. You can find the details in the KIP.
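For illustration, a rough sketch of using RemoteClusterUtils (from the connect-mirror-client artifact) to look up translated group offsets on the target cluster and seek a consumer there; the cluster alias, group id and bootstrap servers are placeholders:

import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.connect.mirror.RemoteClusterUtils;

Map<String, Object> targetConfig = new HashMap<>();
targetConfig.put("bootstrap.servers", "target-cluster:9092");                 // placeholder
targetConfig.put("group.id", "my-group");                                     // placeholder
targetConfig.put("key.deserializer", ByteArrayDeserializer.class.getName());
targetConfig.put("value.deserializer", ByteArrayDeserializer.class.getName());

// Translate the group's committed offsets from the source cluster (alias "primary")
// into offsets that are valid on the target cluster, using MM2's checkpoints.
Map<TopicPartition, OffsetAndMetadata> translated =
    RemoteClusterUtils.translateOffsets(targetConfig, "primary", "my-group", Duration.ofSeconds(30));

try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(targetConfig)) {
    consumer.assign(translated.keySet());
    translated.forEach((tp, om) -> consumer.seek(tp, om.offset()));
    // ... poll as usual from the translated positions
}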
In itself, relying on the fact that both clusters will have the same offsets is faulty.
An offset is a relative characteristic. It's not part of the message itself; it's literally a position inside the log file. And those files, the Kafka log files, also rotate and have retention. There's no guarantee that those log files are identical at any given point in time. Kafka doesn't claim to solve such an issue.
Besides, it's tricky to solve from a CAP point of view.
And it's also pointless unless you want strict physical replication.
That's why Kafka multi-cluster tools are usually about logical replication. I have not used MirrorMaker (MM), but I've used Replicator (a more advanced commercial tool by Confluent), and it has a feature for that called, who would have guessed, just like the MM one: offset translation.
Replicator does the following:
1) Reads the consumer offset and timestamp information from the __consumer_timestamps topic in the origin cluster to understand a consumer group's progress.
2) Translates the committed offsets in the origin datacenter to the corresponding offsets in the destination datacenter.
3) Writes the translated offsets to the __consumer_offsets topic in the destination cluster, as long as no consumers in that group are connected to the destination cluster.
Note: You do need to add an interceptor to your Kafka Consumers.
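As a sketch of that interceptor configuration (the class name is taken from Confluent's Replicator documentation; verify it, and the required jar, against your Replicator version):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

Properties props = new Properties();
// ... usual consumer settings ...
// Publishes the group's progress to __consumer_timestamps on the origin cluster,
// which Replicator uses for offset translation.
props.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
    "io.confluent.connect.replicator.offsets.ConsumerTimestampsInterceptor");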

Get latest values from a topic on consumer start, then continue normally

We have a Kafka producer that produces keyed messages at a very high frequency to topics whose retention time is 10 hours. These messages are real-time updates, and the key used is the ID of the element whose value has changed. So the topic is acting as a changelog and will have many duplicate keys.
Now, what we're trying to achieve is that when a Kafka consumer launches, regardless of its last known state (new consumer, crashed, restarted, etc.), it will somehow construct a table with the latest values of all the keys in the topic, and then keep listening for new updates as normal, keeping the load on the Kafka server to a minimum and letting the consumer do most of the job. We tried many ways and none of them seems to be the best.
What we tried:
1 changelog topic + 1 compact topic:
The producer sends the same message to both topics, wrapped in a transaction to ensure a successful send (see the sketch after this list).
Consumer launches and requests the latest offset of the changelog topic.
Consumes the compacted topic from the beginning to construct the table.
Continues consuming the changelog from the requested offset.
Cons:
Having duplicates in the compacted topic is very likely, even when setting the log compaction frequency as high as possible.
x2 number of topics on the Kafka server.
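For reference, writing the same record to both topics inside one transaction would look roughly like this (a sketch; topic names, brokers and the transactional id are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");           // placeholder
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "changelog-dual-writer");  // placeholder, unique per producer instance
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

KafkaProducer<Integer, String> producer = new KafkaProducer<>(props);
producer.initTransactions();

Integer key = 42;                 // placeholder element ID
String value = "latest-state";    // placeholder update payload
try {
    producer.beginTransaction();
    // The same key/value goes to the changelog topic and to the compacted topic
    producer.send(new ProducerRecord<>("updates-changelog", key, value));    // placeholder topic
    producer.send(new ProducerRecord<>("updates-compacted", key, value));    // placeholder topic
    producer.commitTransaction();
} catch (Exception e) {
    producer.abortTransaction();
    throw e;
}

Consumers reading these topics would typically set isolation.level=read_committed so they never see aborted writes.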
KSQL:
With KSQL we either have to rewrite a KTable as a topic so that the consumer can see it (extra topics), or we would need consumers to execute a KSQL SELECT against the KSQL REST server and query the table (not as fast and performant as the Kafka APIs).
Kafka Consumer API:
Consumer starts and consumes the topic from the beginning. This worked perfectly, but the consumer has to consume the 10-hour changelog to construct the last-values table.
Kafka Streams:
By using KTables as following:
KTable<Integer, MarketData> tableFromTopic = streamsBuilder.table("topic_name", Consumed.with(Serdes.Integer(), customSerde));
KTable<Integer, MarketData> filteredTable = tableFromTopic.filter((key, value) -> keys.contains(value.getRiskFactorId()));
Kafka Streams will create 1 topic on the Kafka server per KTable (named {consumer_app_id}-{topic_name}-STATE-STORE-0000000000-changelog), which will result in a huge number of topics since we have a big number of consumers.
From what we have tried, it looks like we need to either increase the server load, or the consumer launch time. Isn't there a "perfect" way to achieve what we're trying to do?
Thanks in advance.
By using KTables, Kafka Streams will create 1 topic on the Kafka server per KTable, which will result in a huge number of topics since we have a big number of consumers.
If you are just reading an existing topic into a KTable (via StreamsBuilder#table()), then no extra topics are being created by Kafka Streams. Same for KSQL.
It would help if you could clarify what exactly you want to do with the KTable(s). Apparently you are doing something that does result in additional topics being created?
1 changelog topic + 1 compact topic:
Why were you thinking about having two separate topics? Normally, changelog topics should always be compacted. And given your use case description, I don't see a reason why it should not be:
Now, what we're trying to achieve is that when a Kafka consumer launches, regardless of the last known state (new consumer, crashed, restart, etc..), it will somehow construct a table with the latest values of all the keys in a topic, and then keeps listening for new updates as normal [...]
Hence compaction would be very useful for your use case. It would also prevent this problem you described:
Consumer starts and consumes the topic from the beginning. This worked perfectly, but the consumer has to consume the 10-hour changelog to construct the last-values table.
Note that, to reconstruct the latest table values, all three of Kafka Streams, KSQL, and the Kafka Consumer must read the table's underlying topic completely (from beginning to end). If that topic is NOT compacted, this might indeed take a long time depending on the data volume, topic retention settings, etc.
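As an aside, if you control that topic, compaction is just a topic-level config; a minimal sketch of creating a compacted topic with the Java AdminClient (topic name, partition count and replication factor are placeholders):

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");        // placeholder

try (Admin admin = Admin.create(props)) {
    NewTopic table = new NewTopic("market-data-table", 6, (short) 3)         // placeholders
        .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT));
    admin.createTopics(Collections.singleton(table)).all().get();
}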
From what we have tried, it looks like we need to either increase the server load, or the consumer launch time. Isn't there a "perfect" way to achieve what we're trying to do?
Without knowing more about your use case, particularly what you want to do with the KTable(s) once they are populated, my answer would be:
Make sure the "changelog topic" is also compacted.
Try KSQL first. If this doesn't satisfy your needs, try Kafka Streams. If this doesn't satisfy your needs, try the Kafka Consumer.
For example, I wouldn't use the Kafka Consumer if it is supposed to do any stateful processing with the "table" data, because the Kafka Consumer lacks built-in functionality for fault-tolerant stateful processing.
Consumer starts and consumes the topic from the beginning. This worked perfectly, but the consumer has to consume the 10-hour changelog to construct the last-values table.
During the first time your application starts up, what you said is correct.
To avoid this during every restart, store the key-value data in a file.
For example, you might want to use a persistent map (like MapDB).
Since you give the consumer a group.id and you commit the offsets either periodically or after each record is stored in the map, the next time your application restarts it will read from the last committed offset for that group.id.
So the problem of taking a lot of time occurs only initially (during the first run). As long as you have the file, you don't need to consume from the beginning.
In case the file is not there or is deleted, just seekToBeginning in the KafkaConsumer and build it again.
Somewhere, you need to store these key-values for retrieval, so why can't it be a persistent store?
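A rough sketch of that pattern, i.e. a plain consumer that keeps the latest value per key in a persistent map and commits offsets after persisting (this assumes the MapDB library; topic, group and file names are placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ConcurrentMap;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.Serializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");            // placeholder
props.put(ConsumerConfig.GROUP_ID_CONFIG, "table-builder");                   // placeholder
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");               // only used when no offset is committed yet
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

// File-backed map; survives restarts
DB db = DBMaker.fileDB("latest-values.db").transactionEnable().make();
ConcurrentMap<Integer, String> table = db.hashMap("table", Serializer.INTEGER, Serializer.STRING).createOrOpen();

try (KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(List.of("topic_name"));                                // placeholder
    while (true) {
        for (ConsumerRecord<Integer, String> r : consumer.poll(Duration.ofMillis(500))) {
            table.put(r.key(), r.value());                                    // keep only the latest value per key
        }
        db.commit();                                                          // flush the map to disk first
        consumer.commitSync();                                                // then commit the Kafka offsets
    }
}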
In case you want to use Kafka Streams for whatever reason, an alternative (not as simple as the above) is to use a persistent backing store.
For example, a persistent global store:
streamsBuilder.addGlobalStore(
    Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(topic), keySerde, valueSerde),
    topic,
    Consumed.with(keySerde, valueSerde),
    this::updateValue);
P.S.: There will be a file called .checkpoint in the state directory which stores the offsets. In case the topic is deleted in the middle, you get an OffsetOutOfRangeException. You may want to avoid this, perhaps by using an UncaughtExceptionHandler.
Refer to https://stackoverflow.com/a/57301986/2534090 for more.
Finally, it is better to use a Consumer with a persistent file rather than Streams for this, because of the simplicity it offers.

Kafka: multiple consumers in the same group

Let's say I have a Kafka cluster with several topics spread over several partitions. Also, I have a cluster of applications acting as clients for Kafka. Each application in that cluster has a client that is subscribed to the same set of topics, which is identical across the whole cluster. Also, each of these clients shares the same Kafka group ID.
Now, speaking of the commit mode: I really do not want to specify offsets manually, but I do not want to use autocommit either, because I need to do some handling after I receive my data from Kafka.
With this solution, I expect the "same data received by different consumers" problem to occur, because I do not specify an offset before I read (consume), and I read data concurrently from different clients.
Now, my question: what are the solutions to get rid of multiple reads? Several options come to my mind:
1) Exclusive (sequential) Kafka access. Until one consumer has committed its read, no other consumer accesses Kafka.
2) Somehow specify the offset before each read. I do not even know how to do that under the assumption that a read might fail (and the offset will not be committed) - we would need some complicated distributed offset storage.
I'd like to ask people experienced with Kafka to recommend something to achieve the behaviour I need.
Every partition is consumed by only one client - another client with the same group ID won't get access to that partition, so concurrent reads won't occur.
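For illustration, the "handle first, then commit" pattern looks roughly like this (a sketch; topic, group id and brokers are placeholders, and handle() stands for your own processing). Within one group, each partition is read by a single consumer, so the instances never read the same records concurrently:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder
props.put(ConsumerConfig.GROUP_ID_CONFIG, "shared-group");           // the same group id on every application instance
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");        // commit manually, after processing
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(List.of("my-topic"));                          // placeholder
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> r : records) {
            handle(r);                                                // your processing step (hypothetical)
        }
        consumer.commitSync();                                        // only commit once the batch has been handled
    }
}

If an instance crashes before the commit, its partitions are reassigned and the uncommitted records are re-delivered to the new owner, so the processing should be idempotent.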

How are consumer offsets maintained in a mirrored cluster in Kafka?

Let's say I have two Kafka clusters and I am using MirrorMaker to mirror the topics from one cluster to another. I understand the consumer commits offsets to the __consumer_offsets topic in its Kafka cluster. I need to know what will happen if the primary Kafka cluster goes down. Do we sync the __consumer_offsets topic as well? Because the secondary cluster could have a different number of brokers and other settings, I think.
Please tell me how a Kafka mirrored cluster takes care of consumer offsets.
Does the auto.offset.reset setting play a role here?
Update
Since Apache Kafka 2.7.0, MirrorMaker is able to replicate committed offsets. Cf https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0
Original Answer
Mirror maker does not replicate offsets.
Furthermore, auto.offset.reset is completely unrelated to this, because it's a consumer setting that defines where a consumer should start reading in the case that no valid committed offset is found at startup.
The reason for not mirroring offsets is basically that they can be meaningless on the mirror cluster, because it is not guaranteed that the messages will have the same offsets in both clusters.
Thus, in the failover case, you need to figure out something "smart" by yourself. One way would be to remember the metadata timestamp of your last processed record. This allows you to "seek" based on that timestamp on the mirror cluster to find an approximate offset there. (You will need Kafka 0.10 or later for this.)
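A rough sketch of that timestamp-based seek on the mirror cluster (the consumer is assumed to be already connected to the mirror cluster and assigned the mirrored partitions; lastProcessedTimestamp stands for the record timestamp you remembered from the primary cluster):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

long lastProcessedTimestamp = 1234567890L;                        // placeholder: remembered from the primary cluster

Map<TopicPartition, Long> query = new HashMap<>();
for (TopicPartition tp : consumer.assignment()) {
    query.put(tp, lastProcessedTimestamp);
}

// For each partition, find the first offset whose record timestamp is >= the given timestamp
Map<TopicPartition, OffsetAndTimestamp> found = consumer.offsetsForTimes(query);

found.forEach((tp, ot) -> {
    if (ot != null) {
        consumer.seek(tp, ot.offset());                           // resume from the approximate position
    } else {
        consumer.seekToEnd(Collections.singleton(tp));            // no records at or after that timestamp
    }
});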