Kafka Streams - time window close delay?

I'm new to Kafka Streams.
I use the suppress method of KTable in order to handle only the final result of a window like this:
myStream
    .windowedBy(TimeWindows.of(Duration.ofSeconds(10)).grace(Duration.ofMillis(500)))
    .aggregate(new Aggregation(),
        (k, v, a) -> a, // actual aggregation disabled to rule out any possibility of aggregation latency
        materialized.withLoggingDisabled())
    .suppress(untilWindowCloses(Suppressed.BufferConfig.unbounded()))
    .toStream()
    .peek((k, v) -> log.info("delay " + (System.currentTimeMillis() - k.window().endTime().toEpochMilli())));
This way I get a log line every 10 seconds with the difference between the window end and the time peek was actually called.
I would expect a very small number here, since this code does practically nothing...
Nevertheless, I see delays of 4-20 seconds for each key/window.
I use a thread per task (5 threads for this topic).
Can someone please point out if I'm doing anything wrong?
Thanks!
Edit:
Profiling with VisualVM shows that ~99% of the time is spent in sun.nio.ch.SelectorImpl.select(). As far as I understand, this means the process is "idle" most of the time.
Edit:
It seems that lowering "commit.interval.ms" (which defaults to 30000) reduced the delay drastically.
Still, the delay peaks at up to 15 seconds, so the problem isn't fully solved yet...
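For reference, lowering the commit interval (and disabling the record cache, which can also hold results back) looks roughly like the sketch below. This is a minimal Scala sketch against the Java StreamsConfig constants; the application id, bootstrap server, and the 100 ms value are illustrative, not recommendations:

    import java.util.Properties
    import org.apache.kafka.streams.StreamsConfig

    val props = new Properties()
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "window-close-test") // illustrative id
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    // Default is 30000 ms; lowering it makes buffered results flush more often,
    // which is what reduced the observed delay here.
    props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, "100")
    // Disabling the record cache can also make downstream updates visible sooner.
    props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, "0")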

Related

How or When is time window of Kafka Streams expired?

I use Kafka Streams in my application, and I have a question about the time window in the aggregate function.
KTable<Windowed<String>, PredictReq> windowedKtable =
    views.map(new ValueMapper())
         .groupByKey()
         .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(1)))
         .aggregate(new ADInitializer(), new ADAggregator(), Materialized.with(Serdes.String(), ReqJsonSerde));
KStream<Windowed<String>, Req> filtered = windowedKtable.toStream().transform(new ADTransformerFilter());
KStream<String, String> result = filtered.transform(new ADTransformerTrans());
KStream<Windowed<String>, Req> filtered = windowedKtable.toStream().transform(new ADTransformerFilter());
KStream<String, String> result = filtered.transform(new ADTransformerTrans());
I aggregate data in a 1-minute window, then transform to get the final aggregate result, and do a second transform.
Here is some sample data:
msg1 arrives at 10:00:00, msg2 at 10:00:20, msg3 at 10:01:10.
The window runs from 10:00:00 to 10:01:00, for example.
I found the window does not expire until msg3 arrives! (The following transform is not executed until msg3 arrives.)
This is not what I want.
Is there something wrong with my testing? If this is really the behavior, how can I change it?
I see...
Kafka Streams has no built-in notion of a window "expiring" on its own; stream time only advances when new messages arrive. So I use the window embedded in each message to check whether the window has changed, which means I must wait for a message from the next window.
If no next message comes, I cannot tell that the window is finished.
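If you need a window's result even when no message from the next window ever arrives, one workaround is a wall-clock punctuator in the Processor API. The sketch below (in Scala against the Java API) is an assumption-laden illustration, not built-in behavior: WindowFlusher is a hypothetical transformer, Req is the question's type, and the window-store scan is left as a comment because the store wiring isn't shown in the question.

    import java.time.Duration
    import org.apache.kafka.streams.KeyValue
    import org.apache.kafka.streams.kstream.{Transformer, Windowed}
    import org.apache.kafka.streams.processor.{ProcessorContext, PunctuationType}

    // Hypothetical sketch: flush windows on wall-clock time instead of waiting
    // for a message from the next window to advance stream time.
    class WindowFlusher extends Transformer[Windowed[String], Req, KeyValue[String, Req]] {
      private var context: ProcessorContext = _

      override def init(ctx: ProcessorContext): Unit = {
        context = ctx
        // Fires every 10 seconds of wall-clock time, even if no messages arrive.
        context.schedule(Duration.ofSeconds(10), PunctuationType.WALL_CLOCK_TIME, (timestamp: Long) => {
          // Scan your window store here and context.forward(...) any window
          // whose end time is in the past, then mark it as emitted.
        })
      }

      // Pass records through (or buffer them), as your ADTransformerFilter does.
      override def transform(key: Windowed[String], value: Req): KeyValue[String, Req] = null

      override def close(): Unit = ()
    }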

Offer to queue with some initial delay

I want to offer a string sent in a load request to a queue, after some initial delay, say 10 seconds.
If subsequent requests are made with a short delay between them (1 second), everything works fine, but if they are made continuously, e.g. from a script, then there is no delay.
Here is the sample code.
def load(randomStr: String) = Action { implicit request =>
  Source.single(randomStr)
    .delay(10 seconds, DelayOverflowStrategy.backpressure)
    .map { x =>
      println(x)
      queue.offer(x)
    }
    .runWith(Sink.ignore)
  Ok("")
}
I am not entirely sure that this is the correct way of doing what you want. There are some things you need to reconsider (see the sketch after this list):
A delayed source has an initial buffer capacity of 16 elements. You can increase this with addAttributes(initialBuffer).
In your case the buffer can never actually become full, because each call provides just one element.
Who is the caller of the Action? You are defining a DelayOverflowStrategy.backpressure strategy, but is the caller able to handle this?
On every call of the action you are creating a stream consisting of one element; how is the backpressure helping here? It is applied to the stream processing, not to the offering to the queue.
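One alternative sketch, assuming what you actually want is a single shared 10-second delay stage rather than a new one-element stream per request (all names here are illustrative): materialize one long-lived queue with the delay built in, and have each Action merely offer to it.

    import akka.actor.ActorSystem
    import akka.stream.{ActorMaterializer, Attributes, DelayOverflowStrategy, OverflowStrategy}
    import akka.stream.scaladsl.{Keep, Sink, Source}
    import scala.concurrent.duration._

    implicit val system: ActorSystem = ActorSystem("delayed-queue")
    implicit val mat: ActorMaterializer = ActorMaterializer() // Akka 2.5; with 2.6+ the implicit system suffices

    // One long-lived stream: every offered element is printed (or forwarded to
    // your real queue) roughly 10 seconds after it was offered.
    val inlet = Source
      .queue[String](256, OverflowStrategy.backpressure)
      .delay(10.seconds, DelayOverflowStrategy.backpressure)
      // delay's internal buffer defaults to 16 elements; widen it so continuous
      // callers are not immediately backpressured.
      .addAttributes(Attributes.inputBuffer(initial = 256, max = 256))
      .toMat(Sink.foreach(println))(Keep.left)
      .run()

    // Inside the Action: inlet.offer(randomStr) // returns a Future[QueueOfferResult]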

Health Reporting Stuck

In a simple stateless service I am attempting to report health. As a test, I am simply flipping between the Ok and Warning states on every iteration of my loop in RunAsync (it has a sleep interval of 15 seconds). The code looks like this:
// report warning on odd iterations
HealthState state = ((++iterations % 2) != 0) ? HealthState.Warning : HealthState.Ok;
HealthInformation health = new HealthInformation("ServiceCode", "Iteration", state);
Partition.ReportInstanceHealth(health);
I am logging the state on each iteration of the loop, and the log shows it flipping back and forth. But in SF Explorer it is stuck on Ok, never switching to Warning (I have a refresh interval of 5 seconds in SF Explorer).
What am I doing wrong here?
Try specifying HealthInformation.SequenceNumber with an incremental value for every state change.
It looks like health reporting is significantly slower than I expected. The 15-second interval was not enough time for it to show the alternating health state; moving to a 30-second interval seems to have resolved it.

Rx Extensions - Proper way to use delay to prevent unnecessary observables from executing?

I'm trying to use delay and amb to execute a sequence of the same task separated by time.
All I want is for a download attempt to execute some time in the future, only if the same task failed before in the past. Here's how I have things set up, but contrary to what I'd expect, all three downloads seem to execute without delay.
Observable.amb([
    Observable.catch(redditPageStream, Observable.empty()).delay(0 * 1000),
    Observable.catch(redditPageStream, Observable.empty()).delay(30 * 1000),
    Observable.catch(redditPageStream, Observable.empty()).delay(90 * 1000),
    # Observable.throw(new Error('Failed to retrieve reddit page content')).delay(10000)
    # Observable.create(
    #     (observer) ->
    #         throw new Error('Failed to retrieve reddit page content')
    # )
]).defaultIfEmpty(Observable.throw(new Error('Failed to retrieve reddit page content')))
The full code can be found here: src
I was hoping that the first successful observable would cancel out the ones still in delay.
Thanks for any help.
delay doesn't actually stop the execution of whatever you are doing; it just delays when the events are propagated. If you want to delay execution, you would need to do something like:
redditPageStream.delaySubscription(1000)
Since your source produces immediately, the above will delay the actual subscription to the underlying stream, effectively delaying when it begins producing.
I would suggest, though, that you use one of the retry operators to handle your retry logic rather than rolling your own through the amb operator.
redditPageStream.delaySubscription(1000).retry(3);
will give you a constant retry delay; however, if you want to implement a linear back-off approach, you can use the retryWhen() operator instead, which will let you apply whatever logic you want to the back-off.
redditPageStream.retryWhen(errors => {
    return errors
        // Only take 3 errors
        .take(3)
        // Use timer to implement a linear back-off and flatten it
        .flatMap((e, i) => Rx.Observable.timer(i * 30 * 1000));
});
Essentially, retryWhen creates an Observable of errors; each event that makes it through is treated as a retry attempt. If you error or complete the stream, then it will stop retrying.

Future map's (waiting) execution context. Stops execution with FixedThreadPool

// 1 fixed thread
implicit val waitingCtx = scala.concurrent.ExecutionContext.fromExecutor(Executors.newFixedThreadPool(1))

// "map" will use waitingCtx
val ss = (1 to 1000).map { n => // if I change it to 10000, the program stops at some point, as if locked forever
  service1.doServiceStuff(s"service ${n}").map { s =>
    service1.doServiceStuff(s"service2 ${n}")
  }
}
Each doServiceStuff(name: String) takes 5 seconds. doServiceStuff does not take an implicit ExecutionContext as a parameter; it uses its own execution context internally and runs Future { blocking { .. } } on it.
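For context, doServiceStuff as described would look roughly like this. This is a reconstruction from the description, not the actual code; serviceCtx and the cached pool are guesses standing in for whatever context it uses internally.

    import java.util.concurrent.Executors
    import scala.concurrent.{blocking, ExecutionContext, Future}

    // Its own context, separate from waitingCtx (pool type is a guess)
    val serviceCtx = ExecutionContext.fromExecutor(Executors.newCachedThreadPool())

    def doServiceStuff(name: String): Future[String] = Future {
      blocking {
        Thread.sleep(5000) // "each doServiceStuff takes 5 seconds"
        println(name)
        name
      }
    }(serviceCtx)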
In the end the program prints:
took: 5.775849753 seconds for 1000 x 2 stuffs
If I change 1000 to 10000, adding even more tasks (val ss = (1 to 10000)), then the program stops:
~17,027 lines are printed (out of 20,000). No "ERROR" message is printed. No "took" message is printed.
And it will not process any further.
But if I change the execution context to ExecutionContext.fromExecutor(null: Executor) (the global one), then it ends in about 10 seconds (though not normally):
~17,249 lines printed
ERROR: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
took: 10.646309398 seconds
That's the question: why does it stop without any message with the fixed-thread-pool context, but terminate with an error and messages with the global context?
And sometimes it is not reproducible.
UPDATE: I do see "ERROR" and "took" if I increase the pool size from 1 to N. It does not matter how high N is - the ERROR still occurs.
The code is here: https://github.com/Sergey80/scala-samples/tree/master/src/main/scala/concurrency/apptmpl and here, in doManagerStuff2().
I think I have an idea of what's going on. If you squint enough, you'll see that map's duty is extremely lightweight: it just fires off a new future (because doServiceStuff itself returns a Future). I bet the behavior will change if you switch to flatMap, which will actually flatten the nested future and thus wait for the second doServiceStuff call to complete.
Since you're not flattening these futures, all your downstream awaits are awaiting the wrong thing, and you are not catching it because you're discarding whatever the service returns.
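Concretely, the suggested change is just map → flatMap on the outer call (same code as in the question; it relies on the implicit execution context already in scope):

    // flatMap flattens Future[Future[_]] into Future[_], so anything awaiting
    // on ss now waits for the second doServiceStuff call as well.
    val ss = (1 to 1000).map { n =>
      service1.doServiceStuff(s"service ${n}").flatMap { s =>
        service1.doServiceStuff(s"service2 ${n}")
      }
    }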
Update
OK, I misinterpreted your question, although I still think that the nested Future is a bug.
When I try your code with both executors and 10000 tasks, I do get an OutOfMemoryError when creating threads in the ForkJoin execution context (i.e. for the service tasks), which I'd expect. Did you use any specific memory settings?
With 1000 tasks they both complete successfully.