How should the offset be configured for a new consumer group on an existing topic when source connectors can't be paused - apache-kafka

We have an existing topic to which data gets published by a JDBC source connector using the incrementing + timestamp mode (https://docs.confluent.io/current/connect/kafka-connect-jdbc/source-connector/index.html#incremental-query-modes).
We have existing consumer groups which consume data from some existing topics. Now we are introducing a new consumer group (call it group K) which should consume data from the same existing topics and write to a database. As a first step, we have an initial data migration workflow that takes a dump of the source database and copies it to the destination database before we start consuming messages from the existing topics.
Now, when this consumer group starts, what offset should it start from?
One option is to use latest. The problem is that the existing source connectors will keep publishing data to the existing topics while the initial data migration is being done for this new consumer group. In our case we have tens of tables to be migrated, and there can be a gap where the table dump has been taken but changes are still being made to the source database, so data keeps getting added to the topics. There is therefore a chance that we miss processing some records.
We don't have the option to pause the source connectors which would solve the problem for us.
If we use earliest, we will end up processing all the old data from the Kafka topics, which is not required since we have already done the initial data migration.
We want to maintain only one source connector regardless of the number of consumer groups.
I was going through Kafka consumer APIs like offsetsForTimes/seek, which take a timestamp. I could note down the time before the initial data migration and call consumer.seek once the consumer group has started and partitions are assigned. But I couldn't find any docs saying whether that timestamp is GMT based or something else. Is it OK to use this API by passing the time as the number of milliseconds elapsed since the epoch?
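For context, what I have in mind is roughly the following (just a sketch; topic name, group id, and serializers are placeholders):

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class GroupKStarter {
    public static void main(String[] args) {
        // Wall-clock time noted just before the initial data migration starts
        final long migrationStartMs = System.currentTimeMillis();

        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group-k");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("existing-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Look up, per partition, the first offset whose timestamp is >= migrationStartMs
                Map<TopicPartition, Long> query = new HashMap<>();
                for (TopicPartition tp : partitions) {
                    query.put(tp, migrationStartMs);
                }
                Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(query);
                for (Map.Entry<TopicPartition, OffsetAndTimestamp> e : offsets.entrySet()) {
                    if (e.getValue() != null) {
                        consumer.seek(e.getKey(), e.getValue().offset());
                    }
                }
            }
        });

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            // process records, write to the destination database, then commit
            consumer.commitSync();
        }
    }
}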

If I understand this sentence correctly: "If we use offset latest we might lose some data as source connectors might have written some data to the topic during initial data migration", the topic will end up having data from initial loads and CDC data mixed up, so there is no offset that clearly separates the two. Therefore, you will not get far by setting any particular offset.
I see the following options:
Have your consumer group K filter out initial-load data and read from earliest
Produce the initial load data to a dedicated topic
If possible, perform the initial load outside of business hours so that no CDC data is flowing (maybe over a weekend or a bank holiday)

Related

Kafka data reads and offset management with sink

What happens when the consumer reads data from Kafka but fails to write it to the sink? Let's say I read the data from Kafka, apply some transformation to it, and finally store the result in a database. If everything works perfectly, my final result is stored in the database. But let's say for some reason my database isn't available. What happens to the data that I read from Kafka? When I restart my application, can I read the same data again since I failed to store it in the sink? Or will Kafka mark this data as read and not allow me to read it again?
Can you also tell me what this property is used for: enable.auto.commit=true?
There's a part of the metadata in Kafka called consumer offsets. Each message within a partition has a unique offset - an integer value that continually increases with each message.
So, in the scenario you've described:
If you've committed the offset BEFORE writing to the database, then you will not be able to read those messages again after a restart.
But if you commit the offset AFTER writing to the database, then on a failure before the commit you will be able to re-read those messages.
enable.auto.commit=true, as the name suggests, will automatically commit consumer offsets after a certain time interval defined by the auto.commit.interval.ms parameter - which is 5000 ms (5 seconds) by default. So, as you can probably imagine, if these default values are used, then the offsets will be committed every 5 seconds regardless of whether the corresponding messages have landed in the destination or not.
So, you would basically need to control these through your code and set enable.auto.commit to false if you'd like to ensure guaranteed delivery.
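A minimal sketch of that pattern might look like this (the topic name and the saveToDatabase() call are placeholders, not a real API):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SinkWriter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sink-writer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // take control of offset commits
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    saveToDatabase(record.value()); // hypothetical sink write; throws if the DB is down
                }
                // Commit only AFTER the whole batch has landed in the database.
                // If the write (or the app) fails before this line, the same
                // records will be re-read after a restart.
                consumer.commitSync();
            }
        }
    }

    private static void saveToDatabase(String value) {
        // placeholder for the transformation + database insert
    }
}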
Hope this helps!

Best practice at the moment of processing data with dependencies in Kafka?

We are developing an app that takes data from different sources, and once the data is available we process it, put it together, and then move it to a different topic.
In our case we have 3 topics, and each of these topics brings data that is related to data from the other topics. Each entity may or may not arrive at the same time (or within a short period of time), and this is where the problem comes in, because we need to join these 3 entities into one before we move the result to the output topic.
Our idea was to create a separate topic that contains all the data that is not processed yet, and then have a separate thread that checks that topic at fixed intervals and also checks whether its dependencies are available. If they are available, we delete the entity from the separate topic; if not, we keep the entity there until it gets resolved.
At the end of all this explanation, my question is whether it is reasonable to do it this way, or whether there are other good practices or strategies that Kafka provides to solve this kind of scenario?
Kafka messages can get cleaned up after some time based on the retention policy, so you need to store unprocessed messages somewhere.
I can see the option below, but every problem has many possible approaches and solutions:
Process all messages and forward the "not processed" ones to another topic, say A
Use the Kafka Processor API to consume messages from topic A and store them in a state store
Schedule a punctuate() method with a time interval
Iterate over all messages stored in the state store.
Check the dependencies; if they are available, delete the message from the state store and process it, or publish it back to the original topic to get processed again.
You can refer to the link below for reference; a rough sketch of this approach follows it.
https://kafka.apache.org/10/documentation/streams/developer-guide/processor-api.html
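A rough sketch of steps 2-5 might look like this (assuming the Kafka Streams 2.x Processor<K, V> interface; the store name "pending-store" and the checkDependencies() helper are hypothetical):

import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class PendingEntityProcessor implements Processor<String, String> {

    private ProcessorContext context;
    private KeyValueStore<String, String> pendingStore;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        this.context = context;
        this.pendingStore = (KeyValueStore<String, String>) context.getStateStore("pending-store");

        // Schedule a punctuation every 60 seconds of wall-clock time
        context.schedule(Duration.ofSeconds(60), PunctuationType.WALL_CLOCK_TIME, timestamp -> {
            try (KeyValueIterator<String, String> it = pendingStore.all()) {
                while (it.hasNext()) {
                    KeyValue<String, String> entry = it.next();
                    if (checkDependencies(entry.key)) {          // hypothetical dependency check
                        context.forward(entry.key, entry.value); // send downstream, or produce back to the original topic
                        pendingStore.delete(entry.key);
                    }
                }
            }
        });
    }

    @Override
    public void process(String key, String value) {
        // Every "not processed" message from topic A lands here and is buffered in the state store
        pendingStore.put(key, value);
    }

    @Override
    public void close() { }

    private boolean checkDependencies(String key) {
        // Placeholder: look up whether the related entities from the other topics have arrived
        return false;
    }
}

The processor would then be wired into the topology with Topology#addSource/addProcessor/addStateStore, with a persistent key-value store named "pending-store" attached to it.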

Get latest values from a topic on consumer start, then continue normally

We have a Kafka producer that produces keyed messages in a very high frequency to topics whose retention time = 10 hours. These messages are real-time updates and the used key is the ID of the element whose value has changed. So the topic is acting as a changelog and will have many duplicate keys.
Now, what we're trying to achieve is that when a Kafka consumer launches, regardless of the last known state (new consumer, crashed, restart, etc..), it will somehow construct a table with the latest values of all the keys in a topic, and then keeps listening for new updates as normal, keeping the minimum load on Kafka server and letting the consumer do most of the job. We tried many ways and none of them seems the best.
What we tried:
1 changelog topic + 1 compact topic:
The producer sends the same message to both topics, wrapped in a transaction to ensure a successful send.
Consumer launches and requests the latest offset of the changelog topic.
Consumes the compacted topic from beginning to construct the table.
Continues consuming the changelog since the requested offset.
Cons:
Having duplicates in the compacted topic remains very likely, even with the log compaction frequency set as high as possible.
2x the number of topics on the Kafka server.
KSQL:
With KSQL we either have to rewrite a KTable as a topic so that the consumer can see it (extra topics), or we need consumers to execute a KSQL SELECT against the KSQL REST server and query the table (not as fast and performant as the Kafka APIs).
Kafka Consumer API:
Consumer starts and consumes the topic from beginning. This worked perfectly, but the consumer has to consume the 10 hours change log to construct the last values table.
Kafka Streams:
By using KTables as following:
KTable<Integer, MarketData> tableFromTopic = streamsBuilder.table("topic_name", Consumed.with(Serdes.Integer(), customSerde));
KTable<Integer, MarketData> filteredTable = tableFromTopic.filter((key, value) -> keys.contains(value.getRiskFactorId()));
Kafka Streams will create 1 topic on the Kafka server per KTable (named {consumer_app_id}-{topic_name}-STATE-STORE-0000000000-changelog), which will result in a huge number of topics since we have a big number of consumers.
From what we have tried, it looks like we need to either increase the server load, or the consumer launch time. Isn't there a "perfect" way to achieve what we're trying to do?
Thanks in advance.
By using KTables, Kafka Streams will create 1 topic on the Kafka server per KTable, which will result in a huge number of topics since we have a big number of consumers.
If you are just reading an existing topic into a KTable (via StreamsBuilder#table()), then no extra topics are being created by Kafka Streams. Same for KSQL.
It would help if you could clarify what exactly you want to do with the KTable(s). Apparently you are doing something that does result in additional topics being created?
1 changelog topic + 1 compact topic:
Why were you thinking about having two separate topics? Normally, changelog topics should always be compacted. And given your use case description, I don't see a reason why it should not be:
Now, what we're trying to achieve is that when a Kafka consumer launches, regardless of the last known state (new consumer, crashed, restart, etc..), it will somehow construct a table with the latest values of all the keys in a topic, and then keeps listening for new updates as normal [...]
Hence compaction would be very useful for your use case. It would also prevent this problem you described:
Consumer starts and consumes the topic from beginning. This worked perfectly, but the consumer has to consume the 10 hours change log to construct the last values table.
Note that, to reconstruct the latest table values, all three of Kafka Streams, KSQL, and the Kafka Consumer must read the table's underlying topic completely (from beginning to end). If that topic is NOT compacted, this might indeed take a long time depending on the data volume, topic retention settings, etc.
From what we have tried, it looks like we need to either increase the server load, or the consumer launch time. Isn't there a "perfect" way to achieve what we're trying to do?
Without knowing more about your use case, particularly what you want to do with the KTable(s) once they are populated, my answer would be:
Make sure the "changelog topic" is also compacted.
Try KSQL first. If this doesn't satisfy your needs, try Kafka Streams. If this doesn't satisfy your needs, try the Kafka Consumer.
For example, I wouldn't use the Kafka Consumer if it is supposed to do any stateful processing with the "table" data, because the Kafka Consumer lacks built-in functionality for fault-tolerant stateful processing.
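For reference, making the changelog topic compacted (as recommended above) is just a topic-level configuration, cleanup.policy=compact. A minimal sketch with the Java Admin client, with topic name and sizing made up:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateCompactedChangelog {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Compaction keeps only the latest value per key, so new consumers
            // don't have to replay the full 10 hours of updates.
            NewTopic changelog = new NewTopic("market-data-changelog", 6, (short) 3)
                    .configs(Collections.singletonMap("cleanup.policy", "compact"));
            admin.createTopics(Collections.singletonList(changelog)).all().get();
        }
    }
}

An existing topic can be switched to compaction the same way by altering its cleanup.policy config.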
Consumer starts and consumes the topic from beginning. This worked perfectly, but the consumer has to consume the 10 hours change log to construct the last values table.
The first time your application starts up, what you said is correct.
To avoid this during every restart, store the key-value data in a file.
For example, you might want to use a persistent map (like MapDB).
Since you give the consumer a group.id and you commit the offset either periodically or after each record is stored in the map, the next time your application restarts it will read from the last committed offset for that group.id.
So the problem of taking a lot of time occurs only initially (during the first run). As long as you have the file, you don't need to consume from the beginning.
If the file is not there or has been deleted, just seekToBeginning() in the KafkaConsumer and build it again.
You need to store these key-values somewhere for retrieval anyway, so why can't it be a persistent store?
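A rough sketch of that consumer-plus-local-store idea (here a plain in-memory map stands in for a persistent one such as MapDB; topic name, group id, and types are made up):

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LatestValuesTable {
    public static void main(String[] args) {
        // Stand-in for a persistent map (e.g. MapDB) that survives restarts
        Map<Integer, String> latestValues = new ConcurrentHashMap<>();

        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "market-data-table-builder");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.IntegerDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("market-data-changelog"));

            // If the local file was lost, you could instead call
            // consumer.seekToBeginning(consumer.assignment()) after the first poll()
            // to rebuild the table from scratch.
            while (true) {
                ConsumerRecords<Integer, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<Integer, String> record : records) {
                    latestValues.put(record.key(), record.value()); // last write wins per key
                }
                consumer.commitSync(); // commit only after the batch is stored locally
            }
        }
    }
}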
If you want to use Kafka Streams for whatever reason, then an alternative (not as simple as the above) is to use a persistence-backed store.
For example, a persistent global store:
streamsBuilder.addGlobalStore(
        Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(topic), keySerde, valueSerde),
        topic,
        Consumed.with(keySerde, valueSerde),
        this::updateValue);
P.S.: There will be a file called .checkpoint in the state directory which stores the offsets. If the topic is deleted in the middle, you get an OffsetOutOfRangeException. You may want to handle this, perhaps by using an UncaughtExceptionHandler.
Refer to https://stackoverflow.com/a/57301986/2534090 for more.
Finally,
It is better to use a Consumer with a persistent file rather than Streams for this, because of the simplicity it offers.

Is there any way to ensure that duplicate records are not inserted in kafka topic?

I have been trying to implement a queuing mechanism using Kafka where I want to ensure that duplicate records are not inserted into the topic I created.
I found that iteration is possible in the consumer. Is there any way by which we can do this in the producer thread as well?
This is known as exactly-once processing.
You might be interested in the first part of the Kafka FAQ, which describes some approaches to avoiding duplication during data production (i.e. on the producer side):
Exactly once semantics has two parts: avoiding duplication during data production and avoiding duplicates during data consumption.
There are two approaches to getting exactly once semantics during data production:
Use a single-writer per partition and every time you get a network error check the last message in that partition to see if your last write succeeded
Include a primary key (UUID or something) in the message and deduplicate on the consumer.
If you do one of these things, the log that Kafka hosts will be duplicate-free. However, reading without duplicates depends on some co-operation from the consumer too. If the consumer is periodically checkpointing its position then if it fails and restarts it will restart from the checkpointed position. Thus if the data output and the checkpoint are not written atomically it will be possible to get duplicates here as well. This problem is particular to your storage system. For example, if you are using a database you could commit these together in a transaction. The HDFS loader Camus that LinkedIn wrote does something like this for Hadoop loads. The other alternative that doesn't require a transaction is to store the offset with the data loaded and deduplicate using the topic/partition/offset combination.
I think there are two improvements that would make this a lot easier:
Producer idempotence could be done automatically and much more cheaply by optionally integrating support for this on the server.
The existing high-level consumer doesn't expose a lot of the more fine grained control of offsets (e.g. to reset your position). We will be working on that soon.
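A rough sketch of the second approach from the FAQ (put a unique id in every message and let the consumer drop anything it has already seen; the topic name and the in-memory seen-id set are made up, and a durable store would be used in practice). Note that the first improvement mentioned above has since been implemented as the producer setting enable.idempotence=true.

import java.time.Duration;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DedupByUuidSketch {

    // Producer side: attach a unique id (here used as the record key) to every message
    static void send(KafkaProducer<String, String> producer, String payload) {
        String id = UUID.randomUUID().toString();
        producer.send(new ProducerRecord<>("events", id, payload));
    }

    // Consumer side: skip messages whose id has already been processed.
    // An in-memory set is only a sketch; in practice you would use a durable store,
    // e.g. a database table with a unique constraint on the id.
    static void consume(KafkaConsumer<String, String> consumer) {
        Set<String> seenIds = new HashSet<>();
        consumer.subscribe(Collections.singletonList("events"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                if (seenIds.add(record.key())) { // add() returns false for duplicates
                    process(record.value());
                }
            }
        }
    }

    static void process(String value) {
        // hypothetical downstream handling
    }
}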

What is the frequency with which partition offsets are queried by driver using the direct Kafka API in Spark Streaming?

Are the offsets queried for every batch interval or at a different frequency?
When you use the term offsets, I'm assuming you mean the offset and not the actual message. Looking through the documentation, I was able to find two references to the direct approach.
The first one, from Apache Spark Docs
Instead of using receivers to receive data, this approach periodically queries Kafka for the latest offsets in each topic+partition, and accordingly defines the offset ranges to process in each batch. When the jobs to process the data are launched, Kafka’s simple consumer API is used to read the defined ranges of offsets from Kafka (similar to read files from a file system).
This makes it seem like these are independent actions: offsets are queried from Kafka and then assigned to a specific batch to process, and a single query of offsets from Kafka can return offsets that cover multiple Spark batch jobs.
The second one, a blog post from databricks
Instead of receiving the data continuously using Receivers and storing it in a WAL, we simply decide at the beginning of every batch interval what is the range of offsets to consume. Later, when each batch’s jobs are executed, the data corresponding to the offset ranges is read from Kafka for processing (similar to how HDFS files are read).
This one makes it seem more like each batch interval itself determines the range of offsets to consume, and then, when the batch's jobs run, the messages for those offsets are actually fetched from Kafka.
I have never worked with Apache Spark (I mainly use Apache Storm + Kafka), but since the first doc suggests the two steps are independent, I would assume they can happen at different intervals, and the blog post just doesn't mention it because it doesn't get into that level of technical detail.