Proper way to programmatically stop an Alpakka Kafka stream - Scala

We are trying to use Akka Streams with Alpakka Kafka to consume a stream of events in a service. For handling event-processing errors we use Kafka autocommit and more than one queue. For example, if we have the topic user_created, which we want to consume from a products service, we also create user_created_for_products_failed and user_created_for_products_dead_letter. These two extra topics are coupled to a specific Kafka consumer group. If an event fails to be processed, it goes to the failed queue, where we try to consume it again after five minutes; if it fails again, it goes to dead letters.
On deployment we want to ensure that we don't lose events, so we try to stop the stream before stopping the application. As I said, we are using autocommit, but events that are still "in flight" have not been processed yet. Once the stream and the application are stopped, we can deploy the new code and start the application again.
After reading the documentation, we have seen the KillSwitch feature. The problem we see with it is that the shutdown method returns Unit instead of the Future[Unit] we expected. We are not sure that we won't lose events using it, because in tests it completes so quickly that it doesn't seem to be working properly.
As a workaround, we create an ActorSystem for each stream and use its terminate method (which returns a Future[Terminated]). The problem with this solution is that we don't think creating an ActorSystem per stream will scale well, and terminate takes a long time to resolve (in tests it takes up to one minute to shut down).
Have you faced a problem like this? Is there a faster way (compared to ActorSystem.terminate) to stop a stream and ensure that all the events that the Source has emitted have been processed?

From the documentation (emphasis mine):
When using external offset storage, a call to Consumer.Control.shutdown() suffices to complete the Source, which starts the completion of the stream.
val (consumerControl, streamComplete) =
  Consumer
    .plainSource(consumerSettings,
                 Subscriptions.assignmentWithOffset(
                   new TopicPartition(topic, 0) -> offset
                 ))
    .via(businessFlow)
    .toMat(Sink.ignore)(Keep.both)
    .run()

consumerControl.shutdown()
Consumer.Control's shutdown() returns a Future[Done]. From its Scaladoc description:
Shutdown the consumer Source. It will wait for outstanding offset commit requests to finish before shutting down.
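Since shutdown() returns a Future[Done], you can wait on it before letting the application exit. A minimal sketch, not from the original answer, assuming consumerControl is the Control materialized above and a 30-second budget chosen for illustration:

import scala.concurrent.Await
import scala.concurrent.duration._

sys.addShutdownHook {
  // Blocks until outstanding offset commits have finished, or the timeout hits.
  Await.result(consumerControl.shutdown(), 30.seconds)
}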
Alternatively, if you're using offset storage in Kafka, use Consumer.Control.drainAndShutdown, which also returns a Future. Again from the documentation (which contains more information about what drainAndShutdown does under the covers):
val drainingControl =
  Consumer
    .committableSource(consumerSettings.withStopTimeout(Duration.Zero),
                       Subscriptions.topics(topic))
    .mapAsync(1) { msg =>
      business(msg.record).map(_ => msg.committableOffset)
    }
    .toMat(Committer.sink(committerSettings))(Keep.both)
    .mapMaterializedValue(DrainingControl.apply)
    .run()

val streamComplete = drainingControl.drainAndShutdown()
The Scaladoc description for drainAndShutdown:
Stop producing messages from the Source, wait for stream completion and shut down the consumer Source so that all consumed messages reach the end of the stream. Failures in stream completion will be propagated, the source will be shut down anyway.
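As a sketch (not from the original answer), this drain can be registered in Akka's CoordinatedShutdown so that a deploy waits for in-flight messages before the JVM exits. The phase and task name here are assumptions, and the stream's ActorSystem is assumed to be in scope as system:

import akka.actor.CoordinatedShutdown

CoordinatedShutdown(system).addTask(
  CoordinatedShutdown.PhaseServiceUnbind,
  "drain-kafka-consumer"
) { () =>
  // Committer.sink materializes Future[Done], so drainAndShutdown yields
  // a Future[Done] that CoordinatedShutdown can wait on.
  drainingControl.drainAndShutdown()(system.dispatcher)
}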

Related

Reconsume Kafka Message that failed during processing due to DB error

I am new to Kafka and would like to seek advice on the best practice to handle such a scenario.
Scenario:
I have a Spring Boot application with a consumer method that listens for messages via the @KafkaListener annotation. When a message arrives, the consumer method processes it, which simply means performing database updates to different tables via JdbcTemplate.
If the updates to the tables are successful, I manually commit the message by calling the acknowledge() method. If the database update fails, instead of calling acknowledge(), I call the nack() method with a given duration (e.g. 10 seconds) so that the message will reappear to be consumed again.
Things to note
I am not concerned with the ordering of the messages. Whatever event comes I just have to consume and process it, that's all.
I am only given a topic (no retryable topic and no dead letter topic)
Here is the problem
If I do the above, my consumer becomes inconsistent. Let's say I call the nack() method with a duration of 1 min, meaning the same message will reappear after 1 min.
Within this 1 min, there could be "x" number of incoming messages to be consumed and processed. The observation made was that none of these messages get consumed and processed.
What I want to know
Hence, I hope someone will advise me on what I am doing wrong and on the best practice / way to handle such scenarios.
Thanks!
Records are always received in order; when consuming from a single topic, there is no way to defer the current record until later while continuing to process the records after it.
Kafka topics are a linear log and not a queue.
You would need to send it to another topic; the @RetryableTopic (non-blocking retries) feature is specifically designed for this use case.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retry-topic
You could also increase the container concurrency so at least you could continue to process records from other partitions.
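For illustration, here is a minimal sketch of the non-blocking retry pattern that @RetryableTopic automates, written with the plain Kafka clients rather than Spring; the retry topic name and the process function are assumptions:

import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

def handle(record: ConsumerRecord[String, String],
           retryProducer: KafkaProducer[String, String],
           process: ConsumerRecord[String, String] => Unit): Unit =
  try process(record) // hypothetical business logic that may fail
  catch {
    case _: Exception =>
      // Park the failed record on a retry topic and move on; a delayed
      // consumer of that topic retries it later, so the main topic is
      // never blocked by one bad record.
      retryProducer.send(
        new ProducerRecord("my-topic-retry", record.key, record.value))
  }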

(Spring) Kafka appears to consume newly produced messages out of order

Situation:
We have a Spring Boot / Spring Kafka application that is reading from a Kafka topic with a single partition. There is a single instance of the application running and it has a single-threaded KafkaMessageListenerContainer (not Concurrent). We have a single consumer group.
We want to manage offsets ourselves based on committing to a transactional database. At startup, we read initial offsets from our database and seek to that offset and begin reading older messages. (For example with an empty database, we would start at offset 0.) We do this via implementing ConsumerRebalanceListener and seek()ing in that callback. We pause() the KafkaMessageListenerContainer prior to starting it so that we don't read any messages prior to the ConsumerRebalanceListener being invoked (then we resume() the container inside the ConsumerRebalanceListener.onPartitionsAssigned() callback). We acknowledge messages manually as they are consumed.
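(For context, the seek-on-assign pattern described above looks roughly like this with the plain consumer API; this is only a sketch, and offsetsFromDb is a hypothetical lookup into the transactional database:)

import java.util.{Collection => JCollection}
import org.apache.kafka.clients.consumer.{Consumer, ConsumerRebalanceListener}
import org.apache.kafka.common.TopicPartition
import scala.collection.JavaConverters._

class SeekToStoredOffsets(consumer: Consumer[String, String],
                          offsetsFromDb: TopicPartition => Long)
    extends ConsumerRebalanceListener {

  override def onPartitionsRevoked(partitions: JCollection[TopicPartition]): Unit = ()

  override def onPartitionsAssigned(partitions: JCollection[TopicPartition]): Unit =
    // Resume from the database's committed position rather than Kafka's.
    partitions.asScala.foreach(tp => consumer.seek(tp, offsetsFromDb(tp)))
}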
Issue:
While in the middle of reading these older messages (1000s of messages and 10s of seconds/minutes into the reading), a separate application produces messages into the same topic and partition we're reading from.
We observe that these newly produced messages are consumed immediately, intermingled with the older messages we're in the process of reading. So we observe message offsets that jump in this single consumer thread: from the basically sequential offsets of the older messages to ones that are from the new messages that were just produced, and then back to the older, sequential ones.
We don't see any errors in reading messages or anything that would trigger retries or anything like that. The reads of newer messages happen in the main thread as do the reads of older messages, so I don't believe there's another listener container running.
How could this happen? Doesn't this seem contrary to the ordering guarantees Kafka is supposed to provide? How can we prevent this behavior?
Details:
We have the following settings (some in properties, some in code, please excuse the mix):
properties.consumer.isolationLevel = KafkaProperties.IsolationLevel.READ_COMMITTED
properties.consumer.maxPollRecords = 500
containerProps.ackMode = ContainerProperties.AckMode.MANUAL
containerProps.eosMode = ContainerProperties.EOSMode.BETA
spring.kafka.consumer.auto-offset-reset=none
spring.kafka.enable-auto-commit=false
Versions:
Spring Kafka 2.5.5.RELEASE
Kafka 2.5.1
(we could definitely try upgrading if there was a reason to believe this was the result of a bug that was fixed since then.)
I can share some code snippets for any of the above if it's interesting.

How to efficiently produce messages out of a collection to Kafka

In my Scala (2.11) stream application I am consuming data from one queue in IBM MQ and writing it to a Kafka topic that has one partition. After the data is consumed from MQ, the message payload gets split into 3000 smaller messages that are stored in a sequence of strings. Then each of these 3000 messages is sent to Kafka (version 2.x) using KafkaProducer.
How would you send those 3000 messages?
I can't increase the number of queues in IBM MQ (not under my control) nor the number of partitions in the topic (ordering of messages is required, and writing a custom partitioner will impact too many consumers of the topic).
The Producer settings are currently:
acks=1
linger.ms=0
batch.size=65536
But optimizing them is probably a question of its own and not part of my current problem.
Currently, I am doing this:
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

private lazy val kafkaProducer: KafkaProducer[String, String] =
  new KafkaProducer[String, String](someProperties)

val messages: Seq[String] = Seq(String1, …, String3000)

for (msg <- messages) {
  val future = kafkaProducer.send(new ProducerRecord[String, String](someTopic, someKey, msg))
  val recordMetadata = future.get()
}
To me this looks like neither the most elegant nor the most efficient way. Is there a programmatic way to increase throughput?
Edit after the answer from @radai:
Thanks to the answer pointing me in the right direction, I had a closer look at the different producer methods. The book Kafka: The Definitive Guide lists these methods:

Fire-and-forget
We send a message to the server and don't really care if it arrives successfully or not. Most of the time, it will arrive successfully, since Kafka is highly available and the producer will retry sending messages automatically. However, some messages will get lost using this method.

Synchronous send
We send a message, the send() method returns a Future object, and we use get() to wait on the future and see if the send() was successful or not.

Asynchronous send
We call the send() method with a callback function, which gets triggered when it receives a response from the Kafka broker.
And now my code looks like this (leaving out error handling and the definition of the Callback class):
val asyncProducer = new KafkaProducer[String, String](someProperties)

for (msg <- messages) {
  val record = new ProducerRecord[String, String](someTopic, someKey, msg)
  asyncProducer.send(record, new compareProducerCallback)
}
asyncProducer.flush()
I have compared all the methods for 10,000 very small messages. Here are my measured results:
Fire-and-forget: 173683464ns
Synchronous send: 29195039875ns
Asynchronous send: 44153826ns
To be honest, there is probably more potential to optimize all of them by choosing the right properties (batch.size, linger.ms, ...).
The biggest reason I can see for your code to be slow is that you're waiting on every single send future.
Kafka was designed for sending batches. By sending one record at a time, you wait a full round trip for every single record and you get no benefit from compression.
The "idiomatic" thing to do would be to send everything, and then block on all the resulting futures in a second loop.
Also, if you intend to do this, I'd bump linger.ms back up (otherwise your first record would result in a batch of size one, slowing you down overall; see https://en.wikipedia.org/wiki/Nagle%27s_algorithm) and call flush() on the producer once your send loop is done.
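In code, that suggestion looks roughly like this (a sketch reusing someProperties, someTopic, someKey, and messages from the question):

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val producer = new KafkaProducer[String, String](someProperties)

// First loop: hand every record to the producer without waiting, so it
// can batch and compress them.
val futures = messages.map { msg =>
  producer.send(new ProducerRecord[String, String](someTopic, someKey, msg))
}
producer.flush() // push out any partially filled batch

// Second loop: only now block; a failed send surfaces here as an
// ExecutionException from get().
futures.foreach(_.get())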

How to safely skip messages using Lagom Kafka Message Broker API?

We've defined a basic subscriber that skips over failed messages (i.e. messages we are not going to handle, for some business-logic reason) by throwing an exception and relying on Akka Streams' supervision to resume the Flow:
someLagomService
  .someTopic()
  .subscribe
  .withGroupId("lagom-service")
  .atLeastOnce(
    Flow[Int]
      .mapAsync(1)(el => {
        // An exception may occur here, or we map to Done
      })
      .withAttributes(ActorAttributes.supervisionStrategy {
        case t => Supervision.Resume
      })
  )
This seems to work fine for basic use cases under very little load, but we have noticed very strange things for larger numbers of messages (e.g. very frequent re-processing of messages).
Digging into the code, we saw that Lagom's broker.Subscriber.atLeastOnce documentation states:
The flow may pull more elements from upstream but it must emit exactly one Done message for each message that it receives. It must also emit them in the same order that the messages were received. This means that the flow must not filter or collect a subset of the messages, instead it must split the messages into separate streams and map those that would have been dropped to Done.
Additionally, in the implementation of Lagom's KafkaSubscriberActor, we see that the private atLeastOnce essentially unzips the message payload and offset, and then re-zips them back up after our user flow maps messages to Done.
These two tidbits above seem to imply that by using stream supervisors and skipping elements, we can end up in a situation where the committable offsets no longer zip up evenly with the Dones that are to be produced per Kafka message.
Example: If we stream 1, 2, 3, 4 and map 1, 2, and 4 to Done but throw an exception on 3, we have 3 Dones and 4 committable offsets?
Is this correct / expected? Does this mean we should AVOID using stream supervisors here?
What sorts of behavior can the uneven zipping cause?
What is the recommended approach for error handling when it comes to consuming messages off of Kafka via the Lagom message broker API? Is the right thing to do to map / recover failures to Done?
Using Lagom 1.4.10
Is this correct / expected? Does this mean we should AVOID using stream supervisors here?
The official API documentation says:
If the Kafka Lagom message broker module is being used, then by default the stream is automatically restarted when a failure occurs.
So there is no need to add your own supervisionStrategy for error handling: the stream will be restarted by default, and you should not have to think about "skipped" Done messages.
What sorts of behavior can the uneven zipping cause?
Exactly because of this, the documentation says: "This means that the flow must not filter or collect a subset of the messages."
The uneven zipping can commit the wrong (lower) offset, and on restart you might see already-processed messages replayed from that committed lower offset.
What is the recommended approach for error handling when it comes to consuming messages off of Kafka via the Lagom message broker API? Is the right thing to do to map / recover failures to Done?
Lagom takes care of exception handling by dropping the message that caused the error and restarting the stream. Mapping / recovering failures to Done won't change this.
In case you need access to these messages later on, you could consider using Try {}, for example, i.e. not throwing an exception, and collecting the messages with errors by sending them to a different topic. This gives you a chance to monitor the number of errors and to replay the messages that caused them when the conditions are right, i.e. when the bug is fixed.
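A minimal sketch of that idea which keeps the one-Done-per-message contract; process and sendToErrorTopic are hypothetical helpers, not Lagom API:

import scala.concurrent.{ExecutionContext, Future}
import akka.Done
import akka.stream.scaladsl.Flow

def ackOrDivert(process: Int => Future[Done],
                sendToErrorTopic: (Int, Throwable) => Future[Done])
               (implicit ec: ExecutionContext): Flow[Int, Done, _] =
  Flow[Int].mapAsync(1) { el =>
    // Divert failures to an error topic instead of throwing, so exactly
    // one Done is still emitted for every message received.
    process(el).recoverWith { case err => sendToErrorTopic(el, err) }
  }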

Gracefully shut down Flink Kafka Consumer at run time

I am using FlinkKafkaConsumer010 with Flink 1.2.0, and the problem I am facing is: is there a way to shut down the entire pipeline programmatically when some scenario is seen?
One possible solution is to shut down the Kafka consumer source by calling the close() method defined on FlinkKafkaConsumer010; the pipeline should then shut down as well. For this approach, I create a list holding references to all the FlinkKafkaConsumer010 instances I created at the beginning of the pipeline for the Kafka topics. Then, during the execution of the pipeline, another thread calls close() on each FlinkKafkaConsumer010 in my list. I expected this to shut down the consumer, but the result is that the consumer is still running.
Can someone shed some light on this, or give me some other suggestions on how I can programmatically shut down the Flink pipeline at runtime?
Is the scenario that you're trying to respond to based on the input events? If so, I would suggest having a MapFunction somewhere appropriate in the pipeline and deliberately throwing an exception to fail the job when some condition is met.
The other alternative is to look at the isEndOfStream method in KeyedDeserializationSchema: when the condition is met for some event, signal that the stream has ended (see the sketch at the end of this answer).
One other option to consider is to let the MapFunction mentioned above instead be a FlatMapFunction that sends a signaling event to the outside world. A separate process external to the Flink job listens for that event and, when it is received, shuts down the Flink job via the Flink CLI.
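For the isEndOfStream route, here is a sketch against the Flink 1.2-era API; the sentinel value is an assumption, not anything defined by Flink:

import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
import org.apache.flink.streaming.util.serialization.KeyedDeserializationSchema

class StoppableSchema extends KeyedDeserializationSchema[String] {

  override def deserialize(messageKey: Array[Byte], message: Array[Byte],
                           topic: String, partition: Int, offset: Long): String =
    new String(message, "UTF-8")

  // Returning true tells the consumer to stop, which finishes the pipeline.
  override def isEndOfStream(nextElement: String): Boolean =
    nextElement == "SHUTDOWN" // hypothetical poison-pill message

  override def getProducedType: TypeInformation[String] =
    BasicTypeInfo.STRING_TYPE_INFO
}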