How to create a Bulk consumer in Spring webflux and Kafka - apache-kafka

I need to poll Kafka and process events in bulk. Since Reactor Kafka exposes a streaming API, I am getting events as a stream. Is there a way to combine them and get a fixed maximum number of events per batch?
This is what I am doing currently:
final Flux<Flux<ConsumerRecord<String, String>>> receive = KafkaReceiver.create(eventReceiverOptions)
        .receiveAutoAck();

receive
        .concatMap(r -> r)
        .doOnEach(listSignal -> log.info("got one message"))
        .map(consumerRecords -> consumerRecords.value())
        .collectList()
        .flatMap(strings -> {
            log.info("Read messages of size {}", strings.size());
            return processBulkMessage(strings)
                    .doOnSuccess(aBoolean -> log.info("Processed records"))
                    .thenReturn(strings);
        })
        .subscribe();
But the code just hangs after collectList and never reaches the last flatMap.
Thanks in advance.

You are doing a plain flattening with .concatMap(r -> r), which throws away the batching that receiveAutoAck() already gives you (one inner Flux per consumer poll), so collectList() then tries to collect the whole, never-ending stream. To get a stream of lists for your processBulkMessage() to process, move the batch logic into that concatMap():
.concatMap(batch -> batch
        .doOnEach(listSignal -> log.info("got one message"))
        .map(ConsumerRecord::value)
        .collectList())
.flatMap(strings -> {
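(The rest of the flatMap stays as in the question.) Putting it together, a minimal end-to-end sketch of this shape; eventReceiverOptions, processBulkMessage() and log are taken from the question, everything else is illustrative rather than a drop-in implementation:

// imports: org.apache.kafka.clients.consumer.ConsumerRecord,
//          reactor.kafka.receiver.KafkaReceiver
KafkaReceiver.create(eventReceiverOptions)
        .receiveAutoAck()                       // Flux<Flux<ConsumerRecord<K, V>>>: one inner Flux per poll
        .concatMap(batch -> batch
                .map(ConsumerRecord::value)
                .collectList())                 // bounded: collects one poll's worth of records
        .flatMap(strings -> {
            log.info("Read messages of size {}", strings.size());
            return processBulkMessage(strings)
                    .doOnSuccess(aBoolean -> log.info("Processed records"))
                    .thenReturn(strings);
        })
        .subscribe();

The batch size is whatever one poll() returns, so it can be capped via the max.poll.records consumer property on the receiver options.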

Related

Stale ktable records when joining kstream with ktable created by kstream aggregation

I'm trying to implement the event sourcing pattern with Kafka Streams in the following way.
I'm in a Security service and handle two use cases:
Register User, handling RegisterUserCommand should produce UserRegisteredEvent.
Change User Name, handling ChangeUserNameCommand should produce UserNameChangedEvent.
I have two topics:
Command Topic, 'security-command'. Every command is keyed, and the key is the user's email. For example:
foo#bar.com:{"type": "RegisterUserCommand", "command": {"name":"Alex","email":"foo#bar.com"}}
foo#bar.com:{"type": "ChangeUserNameCommand", "command": {"email":"foo#bar.com","newName":"Alex1"}}
Event Topic, 'security-event'. Every record is keyed by the user's email:
foo#bar.com:{"type":"UserRegisteredEvent","event":{"email":"foo#bar.com","name":"Alex", "version":0}}
foo#bar.com:{"type":"UserNameChangedEvent","event":{"email":"foo#bar.com","name":"Alex1","version":1}}
Kafka Streams version 2.8.0
Kafka version 2.8
The implementation idea can be expressed in the following topology:
commandStream = builder.stream("security-command");

eventStream = builder.stream("security-event",
        Consumed.with(
                ...,
                new ZeroTimestampExtractor() /*always returns 0 to get the latest version of snapshot*/));

// build the snapshot to get the current state of the user.
userSnapshots = eventStream.groupByKey()
        .aggregate(() -> new UserSnapshot(),
                (key /*email*/, event, currentSnapshot) -> currentSnapshot.apply(event));

// join commands with latest snapshot at the time of the join
commandWithSnapshotStream =
        commandStream.leftJoin(
                userSnapshots,
                (command, snapshot) -> new CommandWithUserSnapshot(command, snapshot),
                joinParams
        );

// handle the command given the current snapshot
resultingEventStream = commandWithSnapshotStream.flatMap((key /*email*/, commandWithSnapshot) -> {
    var newEvents = commandHandler(commandWithSnapshot.command(), commandWithSnapshot.snapshot());
    return Arrays.stream(newEvents)
            .map(e -> new KeyValue<String, DomainEvent>(e.email(), e))
            .toList();
});

// append events to events topic
resultingEventStream.to("security-event");
For this topology, I'm using EOS exactly_once_beta.
A more explicit version of this topology:
KStream<String, Command<DomainEvent[]>> commandStream =
        builder.stream(
                commandTopic,
                Consumed.with(Serdes.String(), new SecurityCommandSerde()));

KStream<String, DomainEvent> eventStream =
        builder.stream(
                eventTopic,
                Consumed.with(
                        Serdes.String(),
                        new DomainEventSerde(),
                        new LatestRecordTimestampExtractor() /* always returns 0 to get the latest version of the snapshot */));
// build the snapshots ktable by aggregating all the current events for a given user.
KTable<String, UserSnapshot> userSnapshots =
        eventStream.groupByKey()
                .aggregate(
                        () -> new UserSnapshot(),
                        (email, event, currentSnapshot) -> currentSnapshot.apply(event),
                        Materialized.with(
                                Serdes.String(),
                                new UserSnapshotSerde()));

// join command stream and snapshot table to get the stream of pairs <Command, UserSnapshot>
Joined<String, Command<DomainEvent[]>, UserSnapshot> commandWithSnapshotJoinParams =
        Joined.with(
                Serdes.String(),
                new SecurityCommandSerde(),
                new UserSnapshotSerde());

KStream<String, CommandWithUserSnapshot> commandWithSnapshotStream =
        commandStream.leftJoin(
                userSnapshots,
                (command, snapshot) -> new CommandWithUserSnapshot(command, snapshot),
                commandWithSnapshotJoinParams);

var resultingEventStream = commandWithSnapshotStream.flatMap((key /*email*/, commandWithSnapshot) -> {
    var command = commandWithSnapshot.command();
    if (command instanceof RegisterUserCommand registerUserCommand) {
        var handler = new RegisterUserCommandHandler();
        var events = handler.handle(registerUserCommand);
        // multiple events might be produced when a command is handled.
        return Arrays.stream(events)
                .map(e -> new KeyValue<String, DomainEvent>(e.email(), e))
                .toList();
    }
    if (command instanceof ChangeUserNameCommand changeUserNameCommand) {
        var handler = new ChangeUserNameCommandHandler();
        var events = handler.handle(changeUserNameCommand, commandWithSnapshot.userSnapshot());
        return Arrays.stream(events)
                .map(e -> new KeyValue<String, DomainEvent>(e.email(), e))
                .toList();
    }
    throw new IllegalArgumentException("...");
});

resultingEventStream.to(eventTopic, Produced.with(Serdes.String(), new DomainEventSerde()));
Problems I'm getting:
Launching the stream app on a command topic with existing records:
foo#bar.com:{"type": "RegisterUserCommand", "command": {"name":"Alex","email":"foo#bar.com"}}
foo#bar.com:{"type": "ChangeUserNameCommand", "command": {"email":"foo#bar.com","newName":"Alex1"}}
Outcome:
1. Stream application fails when processing the ChangeUserNameCommand, because the snapshot is null.
2. The events topic has a record for successful registration, but nothing for changing the name:
/*OK*/foo#bar.com:{"type":"UserRegisteredEvent","event":{"email":"foo#bar.com","name":"Alex", "version":0}}
Thoughts:
When processing the ChangeUserNameCommand, the snapshot is missing from the aggregated KTable, userSnapshots. Restarting the application successfully produces the following record:
foo#bar.com: {"type":"UserNameChangedEvent","event":{"email":"foo#bar.com","name":"Alex1","version":1}}
Tried increasing the max.task.idle.ms to 4 seconds - no effect.
Launching the stream app and producing a set of ChangeUserNameCommand commands in quick succession.
Producing:
// Produce to command topic
foo#bar.com:{"type": "RegisterUserCommand", "command": {"name":"Alex","email":"foo#bar.com"}}
// event topic outcome
/*OK*/ foo#bar.com:{"type":"UserRegisteredEvent","event":{"email":"foo#bar.com","name":"Alex", "version":0}}
// Produce at once to command topic
foo#bar.com:{"type": "ChangeUserNameCommand", "command": {"email":"foo#bar.com","newName":"Alex1"}}
foo#bar.com:{"type": "ChangeUserNameCommand", "command": {"email":"foo#bar.com","newName":"Alex2"}}
foo#bar.com:{"type": "ChangeUserNameCommand", "command": {"email":"foo#bar.com","newName":"Alex3"}}
// event topic outcome
/*OK*/foo#bar.com: {"type":"UserNameChangedEvent","event":{"email":"foo#bar.com","name":"Alex1","version":1}}
/*NOK*/foo#bar.com: {"type":"UserNameChangedEvent","event":{"email":"foo#bar.com","name":"Alex2","version":1}}
/*NOK*/foo#bar.com: {"type":"UserNameChangedEvent","event":{"email":"foo#bar.com","name":"Alex3","version":1}}
Thoughts:
'ChangeUserNameCommand' commands are joined with a stale version of the snapshot (pay attention to the version attribute).
The expected outcome would be:
foo#bar.com: {"type":"UserNameChangedEvent","event":{"email":"foo#bar.com","name":"Alex1","version":1}}
foo#bar.com: {"type":"UserNameChangedEvent","event":{"email":"foo#bar.com","name":"Alex2","version":2}}
foo#bar.com: {"type":"UserNameChangedEvent","event":{"email":"foo#bar.com","name":"Alex3","version":3}}
Tried increasing max.task.idle.ms to 4 seconds - no effect; setting cache.max.bytes.buffering to 0 also has no effect.
What am I missing in building such a topology? I expect every command to be processed against the latest version of the snapshot. If I produce the commands with a few seconds' delay between them, everything works as expected.
I think you missed the changelog-recovery part for the tables. Read this to understand what happens with changelog recovery:
For tables, it is more complex because they must maintain additional information—their state—to allow for stateful processing such as joins and aggregations like COUNT() or SUM(). To achieve this while also ensuring high processing performance, tables (through their state stores) are materialized on local disk within a Kafka Streams application instance or a ksqlDB server. But machines and containers can be lost, along with any locally stored data. How can we make tables fault tolerant, too?

The answer is that any data stored in a table is also stored remotely in Kafka. Every table has its own change stream for this purpose—a built-in change data capture (CDC) setup, we could say. So if we have a table of account balances by customer, every time an account balance is updated, a corresponding change event will be recorded into the change stream of that table.
Also keep in mind that restarting a Kafka Streams application should not reprocess previously processed events; for that, the offset of a message needs to be committed after it has been processed.
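In Kafka Streams, these commits are handled by the framework rather than per record. A minimal configuration sketch (the application id and bootstrap servers are placeholder assumptions) of the settings that control this, including the EOS mode and the max.task.idle.ms the question already tunes:

// imports: java.util.Properties, org.apache.kafka.streams.KafkaStreams,
//          org.apache.kafka.streams.StreamsConfig
final Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "security-service");      // placeholder id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // placeholder brokers
// The EOS mode used in the question: offsets are committed atomically with the produced
// events and the state-store changelog writes, as part of each transaction.
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_BETA);
// How often the framework commits offsets and flushes state (defaults to 100 ms under EOS).
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 100L);
// How long a task waits for data in its other input partitions before processing anyway.
props.put(StreamsConfig.MAX_TASK_IDLE_MS_CONFIG, 4000L);

final KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();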
Found the root cause. Not sure if it is by design or a bug, but a stream task will wait only once per processing cycle for data in other partitions.
So if two records from the command topic are read first, the stream task will wait max.task.idle.ms, allowing the poll() phase to happen, while processing the first command record. After it is processed, while processing the second one, the stream task will not allow another poll to pick up the newly generated events that resulted from processing the first command.
In Kafka 2.8, the code responsible for this behavior is in StreamTask.java. isProcessable() is invoked at the beginning of the processing phase; if it returns false, the polling phase is repeated.
public boolean isProcessable(final long wallClockTime) {
    if (state() == State.CLOSED) {
        return false;
    }

    if (hasPendingTxCommit) {
        return false;
    }

    if (partitionGroup.allPartitionsBuffered()) {
        idleStartTimeMs = RecordQueue.UNKNOWN;
        return true;
    } else if (partitionGroup.numBuffered() > 0) {
        if (idleStartTimeMs == RecordQueue.UNKNOWN) {
            idleStartTimeMs = wallClockTime;
        }

        if (wallClockTime - idleStartTimeMs >= maxTaskIdleMs) {
            return true;
            // idleStartTimeMs is not reset to its default value (RecordQueue.UNKNOWN) here,
            // so the next time the all-partitions-buffered check is done, `true` is returned,
            // meaning the task is considered ready to be processed.
        } else {
            return false;
        }
    } else {
        // there's no data in any of the topics; we should reset the enforced
        // processing timer
        idleStartTimeMs = RecordQueue.UNKNOWN;
        return false;
    }
}

Threads still alive after subscription

We are currently migrating from RxJava2 to Project Reactor. To parallelize work, we create a new Thread via Schedulers.newThread() for every parallel HTTP request. We cannot reuse Threads because Spring's SecurityContext is bound to a ThreadLocal.
Recently we ran into the problem that after some time the JVM runs into OutOfMemoryErrors, because the created Threads were not disposed, leading to thousands of parked Threads; with RxJava the Threads were disposed after a successful HTTP request.
For RxJava the code would be the following, showing no additional active Threads when a breakpoint is set on the last line:
Observable<String> deferred = Observable.fromCallable(() -> "1")
        .subscribeOn(io.reactivex.schedulers.Schedulers.newThread());
Observable<String> deferred2 = Observable.fromCallable(() -> "2")
        .subscribeOn(io.reactivex.schedulers.Schedulers.newThread());

System.out.println("obs: " + deferred.blockingSingle());
System.out.println("obs2: " + deferred2.blockingSingle());
However, with Project Reactor, both Threads are still alive after both printouts:
Mono<String> mono1 = Mono.fromSupplier(() -> "1").subscribeOn(Schedulers.newSingle("single"));
Mono<String> mono2 = Mono.fromSupplier(() -> "1").subscribeOn(Schedulers.newSingle("single"));
System.out.println("Mono1: " + mono1.block());
System.out.println("Mono2: " + mono2.block());
A solution for this would be to tear down the scheduler manually in doFinally:
Scheduler newSingle = Schedulers.newSingle("single");

Mono<String> doFinally = Mono.defer(() -> Mono.fromSupplier(() -> "1"))
        .subscribeOn(newSingle)
        .doFinally(s -> newSingle.dispose());
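The same teardown idea can be packaged as a small helper so that each subscription gets its own scheduler and disposes it on completion, error, or cancellation. This is only a sketch building on the snippet above; the method name and the "per-request" thread prefix are made up:

// imports: reactor.core.publisher.Mono, reactor.core.scheduler.Scheduler, reactor.core.scheduler.Schedulers
static <T> Mono<T> onFreshThread(Mono<T> source) {
    return Mono.defer(() -> {
        // one fresh single-threaded scheduler per subscription
        Scheduler scheduler = Schedulers.newSingle("per-request");
        return source.subscribeOn(scheduler)
                // dispose the scheduler once this subscription terminates or is cancelled
                .doFinally(signal -> scheduler.dispose());
    });
}

// usage
Mono<String> mono1 = onFreshThread(Mono.fromSupplier(() -> "1"));
Mono<String> mono2 = onFreshThread(Mono.fromSupplier(() -> "2"));
System.out.println("Mono1: " + mono1.block());
System.out.println("Mono2: " + mono2.block());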
However, is this really necessary, or is there a way to establish the same behavior as in RxJava2?

Aggregate a large amount of data using Kafka streams

I'm trying to aggregate a large amount of data using time windows of different sizes with Kafka Streams.
I increased the cache size to 2 GB, but when I set the window size to 1 hour I get 100% CPU load and the application starts to slow down.
My code looks like this:
val tradeStream = builder.stream<String, Trade>(configuration.topicNamePattern, Consumed.with(Serdes.String(), JsonSerde(Trade::class.java)))

tradeStream
    .groupBy(
        { _, trade -> trade.pair },
        Serialized.with(JsonSerde(TokensPair::class.java), JsonSerde(Trade::class.java))
    )
    .windowedBy(TimeWindows.of(windowDuration).advanceBy(windowHop).until(windowDuration))
    .aggregate(
        { Ticker(windowDuration) },
        { _, newValue, aggregate -> aggregate.add(newValue) },
        Materialized.`as`<TokensPair, Ticker>(storeByPairs)
            .withKeySerde(JsonSerde(TokensPair::class.java))
            .withValueSerde(JsonSerde(Ticker::class.java))
    )
    .toStream()
    .filter { tokensPair, _ -> filterFinishedWindow(tokensPair.window(), windowHop) }
    .map { tokensPair, ticker -> KeyValue(
        TickerKey(ticker.tokensPair!!, windowDuration, Timestamp(tokensPair.window().start())),
        ticker.calcPrice()
    )}
    .to(topicName, Produced.with(JsonSerde(TickerKey::class.java), JsonSerde(Ticker::class.java)))
In addition, before the aggregated data are sent to the Kafka topic, they are filtered by the end time of the window so that only finished windows are sent to the topic.
Perhaps there are some better approaches for implementing this kind of aggregation?
Without knowing a bit more about the system it's hard to diagnose.
How many partitions are present in your cluster?
How many stream applications are you running?
Are the stream applications running on the same machine?
Are you using compression for the payload?
Does it work for smaller intervals?
Hope that helps.
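For reference, here is a hypothetical checklist (not from the answer; the values are purely illustrative) of the Streams settings those questions point at, i.e. the knobs that usually matter for heavy windowed aggregations:

// imports: java.util.Properties, org.apache.kafka.streams.StreamsConfig,
//          org.apache.kafka.clients.producer.ProducerConfig
final Properties props = new Properties();
// parallelism per application instance (relates to partition count and instance count)
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
// record cache that absorbs repeated updates per window key (the "2 GB" knob in the question)
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 512L * 1024 * 1024);
// how often caches are flushed and results are forwarded downstream
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 30_000L);
// compression of produced payloads (the compression question above)
props.put(StreamsConfig.producerPrefix(ProducerConfig.COMPRESSION_TYPE_CONFIG), "lz4");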

Observable windowed groupBy leads to OutOfMemoryError

I'm trying to figure out how to use Observable.groupBy to limit the number of elements pushed per key over a time frame. I ended up with the following construct:
create(emitter -> {
    while (true) {
        publishedMeter.mark();
        emitter.onNext(new Object());
    }
})
        .window(1000L, TimeUnit.MILLISECONDS)
        .flatMap(window -> window.groupBy(o -> o.hashCode() % 10_000).flatMapMaybe(Observable::lastElement))
        .subscribe(e -> receivedMeter.mark());
The subscribe's onNext callback is called a few thousand times, which I think should mean that flatMapMaybe does properly subscribe to all the GroupedObservables. After a short while, however, one of the threads inside RxComputationThreadPool fails with an OutOfMemoryError, and I don't understand what I'm missing.

How to send final kafka-streams aggregation result of a time windowed KTable?

What I'd like to do is this:
Consume records from a numbers topic (Longs)
Aggregate (count) the values for each 5 sec window
Send the FINAL aggregation result to another topic
My code looks like this:
KStream<String, Long> longs = builder.stream(
        Serdes.String(), Serdes.Long(), "longs");

// In one ktable, count by key, on a five second tumbling window.
KTable<Windowed<String>, Long> longCounts =
        longs.countByKey(TimeWindows.of("longCounts", 5000L));

// Finally, sink to the long-counts topic.
longCounts.toStream((wk, v) -> wk.key())
        .to("long-counts");
It looks like everything works as expected, but the aggregations are sent to the destination topic for each incoming record. My question is how can I send only the final aggregation result of each window?
In Kafka Streams there is no such thing as a "final aggregation". Windows are kept open all the time to handle out-of-order records that arrive after the window's end time has passed. However, windows are not kept forever; they get discarded once their retention time expires. No special action is taken when a window gets discarded.
See Confluent documentation for more details: http://docs.confluent.io/current/streams/
Thus, for each update to an aggregation, a result record is produced (because Kafka Streams also updates the aggregation result on out-of-order records). Your "final result" would be the latest result record (before a window gets discarded). Depending on your use case, manual de-duplication would be a way to resolve the issue (using the lower-level API, transform() or process()).
This blog post might help, too: https://timothyrenner.github.io/engineering/2016/08/11/kafka-streams-not-looking-at-facebook.html
Another blog post addressing this issue without using punctuations: http://blog.inovatrend.com/2018/03/making-of-message-gateway-with-kafka.html
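A rough sketch of the manual de-duplication with transform() mentioned above. This is not the answer's code: it uses the newer DSL (assuming builder and longCounts come from a current StreamsBuilder-based version of the question's topology), and the store name, the 10-second wall-clock punctuation interval, and folding the window start into the key are all illustrative choices:

// imports: java.time.Duration, java.util.ArrayList, java.util.List,
//          org.apache.kafka.common.serialization.Serdes, org.apache.kafka.streams.KeyValue,
//          org.apache.kafka.streams.kstream.{Produced, Transformer},
//          org.apache.kafka.streams.processor.{ProcessorContext, PunctuationType},
//          org.apache.kafka.streams.state.{KeyValueIterator, KeyValueStore, Stores}

// A plain key-value store that buffers the newest count per window key.
builder.addStateStore(Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("window-dedup-store"), Serdes.String(), Serdes.Long()));

longCounts
        .toStream((wk, v) -> wk.key() + "@" + wk.window().start())   // fold the window start into the key
        .transform(() -> new Transformer<String, Long, KeyValue<String, Long>>() {
            private ProcessorContext context;
            private KeyValueStore<String, Long> latestPerWindow;

            @Override
            @SuppressWarnings("unchecked")
            public void init(final ProcessorContext context) {
                this.context = context;
                this.latestPerWindow = (KeyValueStore<String, Long>) context.getStateStore("window-dedup-store");
                // Periodically forward the newest count per window key and clear the buffer,
                // so downstream sees at most one record per window key per flush.
                context.schedule(Duration.ofSeconds(10), PunctuationType.WALL_CLOCK_TIME, ts -> {
                    final List<KeyValue<String, Long>> snapshot = new ArrayList<>();
                    try (KeyValueIterator<String, Long> it = latestPerWindow.all()) {
                        it.forEachRemaining(snapshot::add);
                    }
                    for (final KeyValue<String, Long> kv : snapshot) {
                        context.forward(kv.key, kv.value);
                        latestPerWindow.delete(kv.key);
                    }
                });
            }

            @Override
            public KeyValue<String, Long> transform(final String windowedKey, final Long count) {
                latestPerWindow.put(windowedKey, count);  // overwrite: only the newest count survives
                return null;                              // emit nothing immediately
            }

            @Override
            public void close() { }
        }, "window-dedup-store")
        .to("long-counts", Produced.with(Serdes.String(), Serdes.Long()));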
Update
With KIP-328, a KTable#suppress() operator was added that allows suppressing consecutive updates in a strict manner and emitting a single result record per window; the tradeoff is increased latency.
From Kafka Streams version 2.1, you can achieve this using suppress.
There is an example in the Apache Kafka Streams documentation that sends an alert when a user has fewer than three events in an hour:
KGroupedStream<UserId, Event> grouped = ...;

grouped
        .windowedBy(TimeWindows.of(Duration.ofHours(1)).grace(ofMinutes(10)))
        .count()
        .suppress(Suppressed.untilWindowCloses(unbounded()))
        .filter((windowedUserId, count) -> count < 3)
        .toStream()
        .foreach((windowedUserId, count) -> sendAlert(windowedUserId.window(), windowedUserId.key(), count));
As mentioned in the update of this answer, you should be aware of the tradeoff. Moreover, note that suppress() is based on event-time.
I faced this issue as well, and solved it by adding grace(0) after the fixed window and using the Suppressed API:
public void process(KStream<SensorKeyDTO, SensorDataDTO> stream) {
    buildAggregateMetricsBySensor(stream)
            .to(outputTopic, Produced.with(String(), new SensorAggregateMetricsSerde()));
}

private KStream<String, SensorAggregateMetricsDTO> buildAggregateMetricsBySensor(KStream<SensorKeyDTO, SensorDataDTO> stream) {
    return stream
            .map((key, val) -> new KeyValue<>(val.getId(), val))
            .groupByKey(Grouped.with(String(), new SensorDataSerde()))
            .windowedBy(TimeWindows.of(Duration.ofMinutes(WINDOW_SIZE_IN_MINUTES)).grace(Duration.ofMillis(0)))
            .aggregate(SensorAggregateMetricsDTO::new,
                    (String k, SensorDataDTO v, SensorAggregateMetricsDTO va) -> aggregateData(v, va),
                    buildWindowPersistentStore())
            .suppress(Suppressed.untilWindowCloses(unbounded()))
            .toStream()
            .map((key, value) -> KeyValue.pair(key.key(), value));
}

private Materialized<String, SensorAggregateMetricsDTO, WindowStore<Bytes, byte[]>> buildWindowPersistentStore() {
    return Materialized
            .<String, SensorAggregateMetricsDTO, WindowStore<Bytes, byte[]>>as(WINDOW_STORE_NAME)
            .withKeySerde(String())
            .withValueSerde(new SensorAggregateMetricsSerde());
}
Here you can see the result