Is RxJava Completable's Emitter.onComplete happens-before Observer's callback? - rx-java2

Given the following code, is it guaranteed that System.out.println(v) will print 1? What if I change the io and computation schedulers to other schedulers?
I have checked the source of the computation scheduler; it seems to use an executor's submit method, and according to the documentation, submit happens-before the execution of the actual Runnable. So I think in this case the happens-before relationship is guaranteed, but does this apply to other schedulers?
import io.reactivex.Completable;
import io.reactivex.schedulers.Schedulers;

public class Test {
    static int v = 0;

    public static void main(String[] args) {
        Completable.create(e -> { v = 1; e.onComplete(); })
                .subscribeOn(Schedulers.io())
                .observeOn(Schedulers.computation())
                .subscribe(() -> System.out.println(v));
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Also, if I assign 1 to v before Completable#create, is this change visible inside the Completable's body?

Given the following code, is it guaranteed that System.out.println(v) will print 1?
Yes.
If you, however, swapped the order, there is no guarantee:
Completable.create(e -> {e.onComplete(); v = 1;})
What if I change the io and computation schedulers to other schedulers?
All standard schedulers have this guarantee.
but does this apply to other schedulers?
Any asynchronous scheduler is expected to provide this happens-before relationship, and the standard ones guarantee it because of the underlying ExecutorService.
if I assign 1 to v before Completable#create, is this change visible to Completable's body?
subscribeOn also establishes a happens-before relationship, so upon subscription v is committed and the body of create will see the value.
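For example, here is a minimal sketch (reusing the Test class from the question, with hypothetical print statements) of how both hand-offs behave: the write on the main thread is visible inside create because of subscribeOn, and the write inside create is visible to the subscriber because of onComplete/observeOn:
// Minimal sketch, assuming the same imports and Test class as in the question.
static int v = 0;

public static void main(String[] args) throws InterruptedException {
    v = 1; // written on the main thread, before subscription
    Completable.create(e -> {
            // subscribeOn hands the subscription to the io thread via an
            // ExecutorService, so the earlier write to v is visible here
            System.out.println("inside create: " + v); // prints 1
            v = 2;
            e.onComplete();
        })
        .subscribeOn(Schedulers.io())
        .observeOn(Schedulers.computation())
        // observeOn hands the terminal event to the computation thread,
        // so the write of 2 happens-before this callback
        .subscribe(() -> System.out.println("in subscriber: " + v)); // prints 2

    Thread.sleep(1000);
}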

Related

Mutiny - Kafka writes happening sequentially

I am new to Quarkus. I am trying to write a REST endpoint using Quarkus reactive that receives an input, does some validation, transforms the input to a list, and then writes a message to Kafka. My understanding was that converting everything to Uni/Multi would result in the execution happening on the I/O thread in an async manner. In the IntelliJ logs, however, I can see that the code is executed sequentially on the executor thread. The Kafka write happens sequentially in its own network thread, which increases latency.
@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Multi<OutputSample> send(InputSample inputSample) {
    ObjectMapper mapper = new ObjectMapper();
    // deflateMessage() converts the input to a list of InputSample
    Multi<InputSample> keys = Multi.createFrom().item(inputSample)
            .onItem().transformToMulti(array -> Multi.createFrom().iterable(deflateMessage.deflateMessage(array)))
            .concatenate();
    return keys.onItem().transformToUniAndMerge(payload -> {
        try {
            return producer.writeToKafka(payload, mapper);
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
        return null;
    });
}
@Inject
@Channel("write")
Emitter<String> emitter;

Uni<OutputSample> writeToKafka(InputSample kafkaPayload, ObjectMapper mapper) throws JsonProcessingException {
    String inputSampleJson = mapper.writeValueAsString(kafkaPayload);
    return Uni.createFrom().completionStage(emitter.send(inputSampleJson))
            .onItem().transform(ignored -> new OutputSample("id", 200, "OK"))
            .onFailure().recoverWithItem(new OutputSample("id", 500, "INTERNAL_SERVER_ERROR"));
}
I have been on it for a couple of days. Not sure if doing anything wrong. Any help would be appreciated.
Thanks
Mutiny, like any other reactive library, is designed mainly around data flow control.
That being said, at its heart it offers a set of capabilities (generally through operators) to control flow execution and scheduling. This means that unless you instruct Mutiny objects to go asynchronous, they will simply execute in a sequential (old-fashioned) way.
Execution scheduling is controlled using two operators:
runSubscriptionOn: causes the code generating the items (generally referred to as the upstream) to execute on a thread from the specified Executor
emitOn: causes the subscribing code (generally referred to as the downstream) to execute on a thread from the specified Executor
You can then update your code as follows causing the deflation to go asynchronous:
Multi<InputSample> keys = Multi.createFrom()
        .item(inputSample)
        .onItem()
        .transformToMulti(array -> Multi.createFrom()
                .iterable(deflateMessage.deflateMessage(array)))
        .runSubscriptionOn(Infrastructure.getDefaultExecutor()) // items will be transformed on a separate thread
        .concatenate();
EDIT: Downstream on a separate thread
In order to have the full downstream, transformation and writing to Kafka queue done on a separate thread, you can use the emitOn operator as follows:
@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Multi<OutputSample> send(InputSample inputSample) {
    ObjectMapper mapper = new ObjectMapper();
    return Uni.createFrom()
            .item(inputSample)
            .onItem()
            .transformToMulti(array -> Multi.createFrom().iterable(deflateMessage.deflateMessage(array)))
            .emitOn(Executors.newFixedThreadPool(5)) // items will be emitted on a separate thread after transformation
            .onItem()
            .transformToUniAndConcatenate(payload -> {
                try {
                    return producer.writeToKafka(payload, mapper);
                } catch (JsonProcessingException e) {
                    e.printStackTrace();
                }
                return Uni.createFrom().<OutputSample>nothing();
            });
}
Multi is intended to be used when you have a source that emits items continuously until it emits a completion event, which is not your case.
From Mutiny docs:
A Multi represents a stream of data. A stream can emit 0, 1, n, or an
infinite number of items.
You will rarely create instances of Multi yourself but instead use a
reactive client that exposes a Mutiny API.
What you are looking for is a Uni<List<OutputSample>>, because your API returns one and only one item: the complete result list.
So what you need is to send each message to Kafka without immediately waiting for its result, collect the generated Unis, and then combine them into a single Uni.
@POST
public Uni<List<OutputSample>> send(InputSample inputSample) {
    // This could be injected directly inside your producer
    ObjectMapper mapper = new ObjectMapper();
    // Send each item to Kafka and collect the resulting Unis
    List<Uni<OutputSample>> uniList = deflateMessage(inputSample).stream()
            .map(input -> producer.writeToKafka(input, mapper))
            .collect(Collectors.toList());
    // Transform the list of Unis into a single Uni of a list
    @SuppressWarnings("unchecked") // Mutiny API fault...
    Uni<List<OutputSample>> result = Uni.combine().all().unis(uniList)
            .combinedWith(list -> (List<OutputSample>) list);
    return result;
}

Kafka Streams - Transformers with State in Fields and Task / Threading Model

I have a Transformer with a state store that uses punctuate to operate on said state store.
After a few iterations of punctuate, the operation may have finished, so I'd like to cancel the punctuate -- but only for the Task that has actually finished the operation on the partition's respective state store. The punctuate operations for the Tasks that are not done yet should keep running. To that purpose my transformer keeps a reference to the Cancellable returned by schedule().
As far as I can tell, every Task always gets its own isolated Transformer instance and every Task gets its own isolated scheduled punctuate() within that instance (?)
However, since this is effectively state, but not inside a stateStore, I'm not sure how safe this is. For instance, are there certain scenarios in which one transformer instance might be shared across tasks (and therefore absolutely no state must be kept outside of StateStores)?
public class CoolTransformer implements Transformer {
    private KeyValueStore stateStore;
    private Cancellable taskPunctuate; // <----- Will this lead to conflicts between tasks?

    public void init(ProcessorContext context) {
        this.stateStore = context.getStateStore(...);
        this.taskPunctuate = context.schedule(Duration.ofMillis(...), PunctuationType.WALL_CLOCK_TIME, this::scheduledOperation);
    }

    private void scheduledOperation(long l) {
        stateStore.get(...);
        // do stuff...
        if (done) {
            this.taskPunctuate.cancel(); // <----- Will this lead to conflicts between tasks?
        }
    }

    public KeyValue transform(key, value) {
        // do stuff
        stateStore.put(key, value);
    }

    public void close() {
        taskPunctuate.cancel();
    }
}
You might want to look into TransformerSupplier, specifically TransformerSupplier#get(): it ensures that a new Transformer is created for each task, so the tasks stay independent. Also, Transformers should not share objects, so be careful of this with your Cancellable taskPunctuate. If either of these rules is violated, you should see errors like org.apache.kafka.streams.errors.StreamsException: Current node is unknown, ConcurrentModificationException, or InstanceAlreadyExistsException.
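As a minimal sketch of that idea (the supplier class name is hypothetical, and raw types are used to match your snippet), get() must return a brand-new transformer on every call so each task ends up with its own taskPunctuate and state store reference:
// Sketch only: never return a cached or shared Transformer from get().
public class CoolTransformerSupplier implements TransformerSupplier {
    @Override
    public Transformer get() {
        // a fresh instance per call means per-task isolation of taskPunctuate
        return new CoolTransformer();
    }
}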

Does a FlowableOperator inherently support backpressure?

I've implemented a FlowableOperator as described in the RxJava2 wiki (https://github.com/ReactiveX/RxJava/wiki/Writing-operators-for-2.0#operator-targeting-lift), except that I perform a check in the onNext() method, something like this:
public final class MyOperator implements FlowableOperator<Integer, Integer> {
    ...
    static final class Op implements FlowableSubscriber<Integer>, Subscription {
        @Override
        public void onNext(Integer v) {
            if (v % 2 == 0) {
                child.onNext(v * v);
            }
        }
        ...
    }
}
This operator is part of a chain where I have a Flowable created with a backpressure drop. In essence, it looks almost like this:
Flowable.<Integer>create(emitter -> myAction(), DROP)
.filter(v -> v > 2)
.lift(new MyOperator())
.subscribe(n -> doSomething(n));
I've run into the following issue:
backpressure occurs, so doSomething(n) cannot keep up with the upstream
items are dropped due to the backpressure strategy chosen
but doSomething(n) never receives new items after the drop, even once it is ready to deal with them again
Reading back David Karnok's excellent blog post http://akarnokd.blogspot.fr/2015/05/pitfalls-of-operator-implementations.html, it seems that I need to add a request(1) in the onNext() method. But that was for RxJava 1...
So, my question is: is this fix enough in RxJava 2 to deal with my backpressure issue? Or does my operator have to implement all the atomics and drain machinery described in https://github.com/ReactiveX/RxJava/wiki/Writing-operators-for-2.0#atomics-serialization-deferred-actions to properly handle it?
Note: I've added the request(1) and it seems to work. But I can't figure out whether it's enough or whether my operator needs the tricky stuff of queue-drain and atomics.
Thanks in advance!
Does a FlowableOperator inherently support backpressure?
FlowableOperator is an interface that is called for a given downstream Subscriber and should return a new Subscriber that wraps the downstream and modulates the Reactive Streams events passing in one or both directions. Backpressure support is the responsibility of the Subscriber implementation, not this particular functional interface. It could have been Function<Subscriber, Subscriber> but a separate named interface was deemed more usable and less prone to overload conflicts.
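For illustration, a minimal sketch of what that amounts to for the MyOperator example above (assuming Op stores the downstream subscriber, the child field in your snippet, via its constructor):
// Sketch only: the operator just wraps the downstream Subscriber;
// all event handling and backpressure logic lives in the returned Op subscriber.
public final class MyOperator implements FlowableOperator<Integer, Integer> {
    @Override
    public Subscriber<? super Integer> apply(Subscriber<? super Integer> downstream) {
        return new Op(downstream);
    }
}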
need to add a request(1) in the onNext() [...]
But I can't figure out whether it's enough or whether my operator needs the tricky stuff of queue-drain and atomics.
Yes, you have to do that in RxJava 2 as well. Since RxJava 2's Subscriber is not a class, it doesn't have v1's convenience request method. You have to save the Subscription in onSubscribe and call upstream.request(1) on the appropriate path in onNext. For your case, it should be quite enough.
I've updated the wiki with a new section explaining this case explicitly:
https://github.com/ReactiveX/RxJava/wiki/Writing-operators-for-2.0#replenishing
final class FilterOddSubscriber implements FlowableSubscriber<Integer>, Subscription {
    final Subscriber<? super Integer> downstream;
    Subscription upstream;
    // ...

    @Override
    public void onSubscribe(Subscription s) {
        if (upstream != null) {
            s.cancel();
        } else {
            upstream = s; // <-------------------------
            downstream.onSubscribe(this);
        }
    }

    @Override
    public void onNext(Integer item) {
        if (item % 2 != 0) {
            downstream.onNext(item);
        } else {
            upstream.request(1); // <-------------------------
        }
    }

    @Override
    public void request(long n) {
        upstream.request(n);
    }

    // the rest omitted for brevity
}
Yes, you have to do the tricky stuff...
I would avoid writing operators unless you are very sure of what you are doing; nearly everything can be achieved with the default operators...
Writing operators, source-like (fromEmitter) or intermediate-like
(flatMap) has always been a hard task to do in RxJava. There are many
rules to obey, many cases to consider but at the same time, many
(legal) shortcuts to take to build a well performing code. Now writing
an operator specifically for 2.x is 10 times harder than for 1.x. If
you want to exploit all the advanced, 4th generation features, that's
even 2-3 times harder on top (so 30 times harder in total).
The tricky stuff is explained here: https://github.com/ReactiveX/RxJava/wiki/Writing-operators-for-2.0

How do I concurrently process Reactor Kafka Streams by Topic and Partition with Auto Acknowledgement?

I am trying to achieve concurrent processing of Kafka Topic-Partitions using Reactor Kafka with auto-acknowledgement. The documentation here makes it seem like this is possible:
http://projectreactor.io/docs/kafka/milestone/reference/#concurrent-ordered
The only difference between that and what I am attempting is I am using auto-acknowledgement.
I have the following code (relevant method is receiveAuto):
public class KafkaFluxFactory<K, V> {
    private final Map<String, Object> properties;

    public KafkaFluxFactory(Map<String, Object> properties) {
        this.properties = properties;
    }

    public Flux<ConsumerRecord<K, V>> receiveAuto(Collection<String> topics, Scheduler scheduler) {
        return KafkaReceiver.create(ReceiverOptions.create(properties).subscription(topics))
                .receiveAutoAck()
                .flatMap(flux -> flux.groupBy(this::extractTopicPartition))
                .flatMap(topicPartitionFlux -> topicPartitionFlux.publishOn(scheduler));
    }

    private TopicPartition extractTopicPartition(ConsumerRecord<K, V> record) {
        return new TopicPartition(record.topic(), record.partition());
    }
}
When I use this to create a Flux of Consumer Records from Kafka with a parallel Scheduler (Schedulers.newParallel("debug", 10)), I see that they all end up getting processed on the same Thread.
Any thoughts on what I may be doing wrong?
After quite a bit of trial-and-error plus some rethinking of what I want to accomplish I realized I was trying to solve two problems in one bit of code.
The two things I need are:
In-order processing of Kafka Partitions
Ability to parallelize the processing of each partition
In trying to solve both with this piece of code, I was limiting downstream users' ability to configure the level of parallelization. I therefore changed the method to return a Flux of GroupedFluxes, which gives downstream users the right granularity for deciding what to parallelize:
public Flux<GroupedFlux<TopicPartition, ConsumerRecord<K, V>>> receiveAuto(Collection<String> topics) {
    return KafkaReceiver.create(createReceiverOptions(topics))
            .receiveAutoAck()
            .flatMap(flux -> flux.groupBy(this::extractTopicPartition));
}
Downstream, users are able to parallelize each emitted GroupedFlux using whatever Scheduler they wish:
public <V> void work(Flux<GroupedFlux<TopicPartition, V>> flux) {
    flux.doOnNext(groupPublisher -> groupPublisher
            .publishOn(Schedulers.elastic())
            .subscribe(this::doWork))
        .subscribe();
}
This has the desired behavior: each TopicPartition GroupedFlux is processed in order, and in parallel with the other GroupedFluxes.
I guess it executes sequentially, at least in your consumer. To consume in parallel you should convert your Flux to a ParallelFlux:
public ParallelFlux<ConsumerRecord<K, V>> receiveAuto(Collection<String> topics, Scheduler scheduler) {
    return KafkaReceiver.create(ReceiverOptions.create(properties).subscription(topics))
            .receiveAutoAck()
            .flatMap(flux -> flux.groupBy(this::extractTopicPartition))
            .flatMap(topicPartitionFlux -> topicPartitionFlux.parallel().runOn(Schedulers.parallel()));
}
Then, in your consumer function, if you want to consume in a parallel way, you should use a method such as:
void subscribe(Consumer<? super T> onNext, Consumer<? super Throwable>
onError, Runnable onComplete, Consumer<? super Subscription> onSubscribe)
Or any other overloaded method with a Consumer<? super T> onNext argument.
If you just use the method below, you will consume the flux in a sequential way:
void subscribe(Subscriber<? super T> s)
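For example, a minimal usage sketch of the Consumer-based subscribe (kafkaFluxFactory, the topic name, and doWork are placeholders, not taken from the question), assuming the ParallelFlux variant of receiveAuto above:
// Sketch: the Consumer-based subscribe keeps the ParallelFlux rails, so records
// from different groups are processed concurrently on the parallel scheduler.
kafkaFluxFactory.receiveAuto(Collections.singletonList("my-topic"), Schedulers.parallel())
        .subscribe(record -> doWork(record));

// By contrast, per the answer above, the subscribe(Subscriber) overload joins the
// rails back together and processing ends up sequential.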

How to access memcached asynchronously in netty

I am writing a server in Netty, in which I need to make a call to memcached. I am using spymemcached and can easily do the synchronous memcached call. I would like this memcached call to be async. Is that possible? The examples provided with Netty do not seem to be helpful.
I tried using callbacks: I created an ExecutorService pool in my handler and submitted a callback worker to this pool, like this:
public class MyHandler extends ChannelInboundMessageHandlerAdapter<MyPOJO> implements CallbackInterface {
    ...
    private static ExecutorService pool = Executors.newFixedThreadPool(20);

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MyPOJO pojo) {
        ...
        CallingbackWorker worker = new CallingbackWorker(key, this);
        pool.submit(worker);
        ...
    }

    public void myCallback(Object response) {
        // get response
        this.ctx.nextOutboundMessageBuf().add(response);
    }
}
CallingbackWorker looks like:
public class CallingbackWorker implements Callable {
public CallingbackWorker(String key, CallbackInterface c) {
this.c = c;
this.key = key;
}
public Object call() {
//get value from key
c.myCallback(value);
}
However, when I do this, this.ctx.nextOutboundMessageBuf() in myCallback gets stuck.
So, overall, my question is: how to do async memcached calls in Netty?
There are two problems here: a small-ish issue with the way you're trying to code this, and a bigger one with many libraries that provide async service calls, but no good way to take full advantage of them in an async framework like Netty. That forces users into suboptimal hacks like this one, or a less-bad, but still not ideal approach I'll get to in a moment.
First the coding problem. The issue is that you're trying to call a ChannelHandlerContext method from a thread other than the one associated with your handler, which is not allowed. That's pretty easy to fix, as shown below. You could code it a few other ways, but this is probably the most straightforward:
private static ExecutorService pool = Executors.newFixedThreadPool(20);

public void channelRead(final ChannelHandlerContext ctx, final Object msg) {
    //...
    final GetFuture<String> future = memcachedClient().getAsync("foo", stringTranscoder());
    // first wait for the response on a pool thread
    pool.execute(new Runnable() {
        public void run() {
            String value;
            Exception err;
            try {
                value = future.get(3, TimeUnit.SECONDS); // or whatever timeout you want
                err = null;
            } catch (Exception e) {
                err = e;
                value = null;
            }
            // put results into final variables; the compiler won't let us use the locals above directly
            final String fValue = value;
            final Exception fErr = err;
            // now process the result on the ChannelHandler's thread
            ctx.executor().execute(new Runnable() {
                public void run() {
                    handleResult(fValue, fErr);
                }
            });
        }
    });
    // note that we drop through to here right after calling pool.execute() and
    // return, freeing up the handler thread while we wait on the pool thread.
}

private void handleResult(String value, Exception err) {
    // handle it
}
That will work, and might be sufficient for your application. But you've got a fixed-size thread pool, so if you're ever going to handle much more than 20 concurrent connections, that will become a bottleneck. You could increase the pool size or use an unbounded one, but at that point you might as well be running under Tomcat, as memory consumption and context-switching overhead start to become issues, and you lose the scalability that was the attraction of Netty in the first place!
And the thing is, Spymemcached is NIO-based, event-driven, and uses just one thread for all its work, yet provides no way to fully take advantage of its event-driven nature. I expect they'll fix that before too long, just as Netty 4 and Cassandra have recently by providing callback (listener) methods on Future objects.
Meanwhile, being in the same boat as you, I researched the alternatives, and not being too happy with what I found, I wrote (yesterday) a Future tracker class that can poll up to thousands of Futures at a configurable rate, and call you back on the thread (Executor) of your choice when they complete. It uses just one thread to do this. I've put it up on GitHub if you'd like to try it out, but be warned that it's still wet, as they say. I've tested it a lot in the past day, and even with 10000 concurrent mock Future objects, polling once a millisecond, its CPU utilization is negligible, though it starts to go up beyond 10000. Using it, the example above looks like this:
// in some globally-accessible class:
public static final ForeignFutureTracker FFT = new ForeignFutureTracker(1, TimeUnit.MILLISECONDS);

// in a handler class:
public void channelRead(final ChannelHandlerContext ctx, final Object msg) {
    // ...
    final GetFuture<String> future = memcachedClient().getAsync("foo", stringTranscoder());

    // add a listener for the Future, with a timeout in 2 seconds, and pass
    // the Executor for the current context so the callback will run
    // on the same thread.
    Global.FFT.addListener(future, 2, TimeUnit.SECONDS, ctx.executor(),
        new ForeignFutureListener<String, GetFuture<String>>() {
            public void operationSuccess(String value) {
                // do something ...
                ctx.fireChannelRead(someval);
            }
            public void operationTimeout(GetFuture<String> f) {
                // do something ...
            }
            public void operationFailure(Exception e) {
                // do something ...
            }
        });
}
You don't want more than one or two FFT instances active at any time, or they could become a drain on CPU. But a single instance can handle thousands of outstanding Futures; about the only reason to have a second one would be to handle higher-latency calls, like S3, at a slower polling rate, say 10-20 milliseconds.
One drawback of the polling approach is that it adds a small amount of latency. For example, polling once a millisecond, on average it will add 500 microseconds to the response time. That won't be an issue for most applications, and I think is more than offset by the memory and CPU savings over the thread pool approach.
I expect within a year or so this will be a non-issue, as more async clients provide callback mechanisms, letting you fully leverage NIO and the event-driven model.