I tried the code below to test serialize().
I called onNext 1,000,000 times from each of 2 different threads and counted the emissions.
I therefore expected the count to be 2,000,000 at onComplete.
However, I couldn't get the expected value.
private static int count = 0;
private static void setCount(int value) {
count = value;
}
private static final int TEST_LOOP = 10;
private static final int NEXT_LOOP = 1_000_000;
@Test
public void test() throws Exception {
for (int test = 0; test < TEST_LOOP; test++) {
Flowable.create(emitter -> {
ExecutorService service = Executors.newCachedThreadPool();
emitter.setCancellable(() -> service.shutdown());
Future<Boolean> future1 = service.submit(() -> {
for (int i = 0; i < NEXT_LOOP; i++) {
emitter.onNext(i);
}
return true;
});
Future<Boolean> future2 = service.submit(() -> {
for (int i = 0; i < NEXT_LOOP; i++) {
emitter.onNext(i);
}
return true;
});
if (future1.get(1, TimeUnit.SECONDS)
&& future2.get(1, TimeUnit.SECONDS)) {
emitter.onComplete();
}
}, BackpressureStrategy.BUFFER)
.serialize()
.cast(Integer.class)
.subscribe(new Subscriber<Integer>() {
private int count = 0;
@Override
public void onSubscribe(Subscription s) {
s.request(Long.MAX_VALUE);
}
@Override
public void onNext(Integer t) {
count++;
}
@Override
public void onError(Throwable t) {
fail(t.getMessage());
}
@Override
public void onComplete() {
setCount(count);
}
});
assertThat(count, is(NEXT_LOOP * 2));
}
}
I wonder whether serialize() doesn't work, or whether I misunderstood the usage of serialize().
I checked the source of SerializedSubscriber.
@Override
public void onNext(T t) {
...
synchronized(this){
...
}
actual.onNext(t);
emitLoop();
}
Since actual.onNext(t); is called outside the synchronized block, I guess that actual.onNext(t); could be called from different threads at the same time. It may also be possible for onComplete to be called before onNext is done, I guess.
I used RxJava 2.0.4.
This is not a bug but a misuse of the FlowableEmitter:
The onNext, onError and onComplete methods should be called in a sequential manner, just like the Subscriber's methods. Use serialize() if you want to ensure this. The other methods are thread-safe.
FlowableEmitter.serialize()
Applying Flowable.serialize() is too late for the create operator.
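For example, the create block of the test above can be adjusted so that both tasks go through a serialized emitter; the downstream Flowable.serialize() call then becomes unnecessary. This is an untested sketch based on the test code above (it reuses NEXT_LOOP from that test):
Flowable.<Integer>create(emitter -> {
    // FlowableEmitter.serialize() returns an emitter whose onNext/onError/onComplete
    // may safely be called concurrently from multiple threads
    FlowableEmitter<Integer> serial = emitter.serialize();
    ExecutorService service = Executors.newCachedThreadPool();
    serial.setCancellable(service::shutdown);
    Callable<Boolean> producer = () -> {
        for (int i = 0; i < NEXT_LOOP; i++) {
            serial.onNext(i);
        }
        return true;
    };
    Future<Boolean> future1 = service.submit(producer);
    Future<Boolean> future2 = service.submit(producer);
    if (future1.get(1, TimeUnit.SECONDS)
            && future2.get(1, TimeUnit.SECONDS)) {
        serial.onComplete();
    }
}, BackpressureStrategy.BUFFER)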
AsyncExecute method causing lag in my treeview application when I am expanding a branch.
Important parts of my TreeView
public DirectoryItemViewModel(string fullPath, DirectoryItemType type, long size)
{
this.ExpandCommand = new AsyncCommand(Expand, CanExecute);
this.FullPath = fullPath;
this.Type = type;
this.Size = size;
this.ClearChildren();
}
public bool CanExecute()
{
return !isBusy;
}
public IAsyncCommand ExpandCommand { get; set; }
private async Task Expand()
{
isBusy = true;
if (this.Type == DirectoryItemType.File)
{
return;
}
List<Task<long>> tasks = new();
var children = DirectoryStructure.GetDirectoryContents(this.FullPath);
this.Children = new ObservableCollection<DirectoryItemViewModel>(
children.Select(content => new DirectoryItemViewModel(content.FullPath, content.Type, 0)));
// If I delete the remaining part of the code in this method, everything works fine.
// I expected it to output the folders without lag and then start calculating their sizes on other threads,
// but it first lags for 1-2 seconds, then outputs the content of the folder, and then starts calculating.
foreach (var item in children)
{
if (item.Type == DirectoryItemType.Folder)
{
tasks.Add(Task.Run(() => GetDirectorySize(new DirectoryInfo(item.FullPath))));
}
}
var results = await Task.WhenAll(tasks);
for (int i = 0; i < results.Length; i++)
{
Children[i].Size = results[i];
}
isBusy = false;
}
My command Interface and class
public interface IAsyncCommand : ICommand
{
Task ExecuteAsync();
bool CanExecute();
}
public class AsyncCommand : IAsyncCommand
{
public event EventHandler CanExecuteChanged;
private bool _isExecuting;
private readonly Func<Task> _execute;
private readonly Func<bool> _canExecute;
public AsyncCommand(
Func<Task> execute,
Func<bool> canExecute = null)
{
_execute = execute;
_canExecute = canExecute;
}
public bool CanExecute()
{
return !_isExecuting && (_canExecute?.Invoke() ?? true);
}
public async Task ExecuteAsync()
{
if (CanExecute())
{
try
{
_isExecuting = true;
await _execute();
}
finally
{
_isExecuting = false;
}
}
RaiseCanExecuteChanged();
}
public void RaiseCanExecuteChanged()
{
CanExecuteChanged?.Invoke(this, EventArgs.Empty);
}
bool ICommand.CanExecute(object parameter)
{
return CanExecute();
}
void ICommand.Execute(object parameter)
{
// I suppose the problem is here, because the IDE hints that I am not awaiting this call, but I don't know how to change it if that is the case.
ExecuteAsync();
}
}
I want to send 100 messages/second from my stream to a Kafka topic. I have more than enough data in the stream to do so.
So far, I have found the windowing concept, but I am unable to adapt it to my use case.
You could do this easily with a ProcessFunction. You would keep a counter in Flink state, and only emit elements when the counter is less than 100. Meanwhile, use a timer to reset the counter to zero once a second.
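For example, something along these lines (an untested sketch; the RateLimiter class name, the constants, and the ListState buffer for over-limit elements are illustrative additions, the alternative being to simply drop those elements):
import java.util.ArrayList;
import java.util.List;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class RateLimiter extends KeyedProcessFunction<String, String, String> {

    private static final long LIMIT = 100;
    private static final long INTERVAL_MS = 1000;

    private transient ValueState<Long> counter;
    private transient ListState<String> buffer;

    @Override
    public void open(Configuration parameters) {
        counter = getRuntimeContext().getState(
                new ValueStateDescriptor<>("counter", Types.LONG));
        buffer = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffer", Types.STRING));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) throws Exception {
        long current = counter.value() == null ? 0L : counter.value();
        if (current == 0L) {
            // first element of the current interval: schedule the counter reset
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + INTERVAL_MS);
        }
        if (current < LIMIT) {
            counter.update(current + 1);
            out.collect(value);
        } else {
            buffer.add(value); // over the limit: hold until the timer fires
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        // new interval: emit as much of the buffered backlog as the limit allows
        long emitted = 0L;
        List<String> remaining = new ArrayList<>();
        for (String v : buffer.get()) {
            if (emitted < LIMIT) {
                out.collect(v);
                emitted++;
            } else {
                remaining.add(v);
            }
        }
        buffer.update(remaining);
        counter.update(emitted);
        if (emitted > 0) {
            // keep the cycle going while there was backlog in this interval
            ctx.timerService().registerProcessingTimeTimer(timestamp + INTERVAL_MS);
        }
    }
}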
Using Flink v1.15, I created the following function.
Refer to checkpointing_under_backpressure
and process_function.
public class RateLimitFunction extends KeyedProcessFunction<String, String, String> {
private transient ValueState<Long> counter;
private transient ValueState<Long> lastTimestamp;
private final Long count;
private final Long millisecond;
public RateLimitFunction(Long count, Long millisecond) {
this.count = count;
this.millisecond = millisecond;
}
@Override
public void open(Configuration parameters) throws Exception {
super.open(parameters);
counter = getRuntimeContext()
.getState(new ValueStateDescriptor<>("counter", TypeInformation.of(Long.class)));
lastTimestamp = getRuntimeContext()
.getState(new ValueStateDescriptor<>("last-timestamp", TypeInformation.of(Long.class)));
}
@Override
public void processElement(String value, KeyedProcessFunction<String, String, String>.Context ctx,
Collector<String> out) throws Exception {
ctx.timerService().registerProcessingTimeTimer(ctx.timerService().currentProcessingTime());
long current = counter.value() == null ? 0L : counter.value();
if (current < count) {
counter.update(current + 1L);
out.collect(value);
} else {
if (lastTimestamp.value() == null) {
lastTimestamp.update(ctx.timerService().currentProcessingTime());
}
Thread.sleep(millisecond);
out.collect(value);
}
}
@Override
public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
if (lastTimestamp.value() != null && lastTimestamp.value() + millisecond <= timestamp) {
counter.update(0L);
lastTimestamp.update(null);
}
}
}
Project Reactor has this factory method for creating a push/pull Publisher<T>.
http://projectreactor.io/docs/core/release/reference/#_hybrid_push_pull_model
Is there any such thing in RxJava-2?
If not, what would be the recommended way (without actually implementing the Reactive Streams interfaces from scratch) to create such a beast that can handle the push/pull model?
EDIT: as requested I am giving an example of the API I am trying to use...
private static class API
{
CompletableFuture<Void> getT(Consumer<Object> consumer) {}
}
private static class Callback implements Consumer<Object>
{
private API api;
public Callback(API api) { this.api = api; }
@Override
public void accept(Object o)
{
//do stuff with o
//...
//request for another o
api.getT(this);
}
}
public void example()
{
API api = new API();
api.getT(new Callback(api)).join();
}
So it's callback based: it will deliver one item, and from within the callback you can request another one. The CompletableFuture signals that there are no more items.
Here is an example of a custom Flowable that turns this particular API into an RxJava source. Note, however, that API peculiarities in general may not be possible to capture with a single reactive bridge design:
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.*;
import java.util.function.*;
import org.reactivestreams.*;
import io.reactivex.Flowable;
import io.reactivex.internal.subscriptions.EmptySubscription;
import io.reactivex.internal.util.BackpressureHelper;
public final class SomeAsyncApiBridge<T> extends Flowable<T> {
final Function<? super Consumer<? super T>,
? extends CompletableFuture<Void>> apiInvoker;
final AtomicBoolean once;
public SomeAsyncApiBridge(Function<? super Consumer<? super T>,
? extends CompletableFuture<Void>> apiInvoker) {
this.apiInvoker = apiInvoker;
this.once = new AtomicBoolean();
}
@Override
protected void subscribeActual(Subscriber<? super T> s) {
if (once.compareAndSet(false, true)) {
SomeAsyncApiBridgeSubscription<T> parent =
new SomeAsyncApiBridgeSubscription<>(s, apiInvoker);
s.onSubscribe(parent);
parent.moveNext();
} else {
EmptySubscription.error(new IllegalStateException(
"Only one Subscriber allowed"), s);
}
}
static final class SomeAsyncApiBridgeSubscription<T>
extends AtomicInteger
implements Subscription, Consumer<T>, BiConsumer<Void, Throwable> {
/** */
private static final long serialVersionUID = 1270592169808316333L;
final Subscriber<? super T> downstream;
final Function<? super Consumer<? super T>,
? extends CompletableFuture<Void>> apiInvoker;
final AtomicInteger wip;
final AtomicLong requested;
final AtomicReference<CompletableFuture<Void>> task;
static final CompletableFuture<Void> TASK_CANCELLED =
CompletableFuture.completedFuture(null);
volatile T item;
volatile boolean done;
Throwable error;
volatile boolean cancelled;
long emitted;
SomeAsyncApiBridgeSubscription(
Subscriber<? super T> downstream,
Function<? super Consumer<? super T>,
? extends CompletableFuture<Void>> apiInvoker) {
this.downstream = downstream;
this.apiInvoker = apiInvoker;
this.requested = new AtomicLong();
this.wip = new AtomicInteger();
this.task = new AtomicReference<>();
}
@Override
public void request(long n) {
BackpressureHelper.add(requested, n);
drain();
}
@Override
public void cancel() {
cancelled = true;
CompletableFuture<Void> curr = task.getAndSet(TASK_CANCELLED);
if (curr != null && curr != TASK_CANCELLED) {
curr.cancel(true);
}
if (getAndIncrement() == 0) {
item = null;
}
}
void moveNext() {
if (wip.getAndIncrement() == 0) {
do {
CompletableFuture<Void> curr = task.get();
if (curr == TASK_CANCELLED) {
return;
}
CompletableFuture<Void> f = apiInvoker.apply(this);
if (task.compareAndSet(curr, f)) {
f.whenComplete(this);
} else {
curr = task.get();
if (curr == TASK_CANCELLED) {
f.cancel(true);
return;
}
}
} while (wip.decrementAndGet() != 0);
}
}
@Override
public void accept(Void t, Throwable u) {
if (u != null) {
error = u;
task.lazySet(TASK_CANCELLED);
}
done = true;
drain();
}
@Override
public void accept(T t) {
item = t;
drain();
}
void drain() {
if (getAndIncrement() != 0) {
return;
}
int missed = 1;
long e = emitted;
for (;;) {
for (;;) {
if (cancelled) {
item = null;
return;
}
boolean d = done;
T v = item;
boolean empty = v == null;
if (d && empty) {
Throwable ex = error;
if (ex == null) {
downstream.onComplete();
} else {
downstream.onError(ex);
}
return;
}
if (empty || e == requested.get()) {
break;
}
item = null;
downstream.onNext(v);
e++;
moveNext();
}
emitted = e;
missed = addAndGet(-missed);
if (missed == 0) {
break;
}
}
}
}
}
Test and example source:
import java.util.concurrent.*;
import java.util.function.Consumer;
import org.junit.Test;
public class SomeAsyncApiBridgeTest {
static final class AsyncRange {
final int max;
int index;
public AsyncRange(int start, int count) {
this.index = start;
this.max = start + count;
}
public CompletableFuture<Void> next(Consumer<? super Integer> consumer) {
int i = index;
if (i == max) {
return CompletableFuture.completedFuture(null);
}
index = i + 1;
CompletableFuture<Void> cf = CompletableFuture
.runAsync(() -> consumer.accept(i));
CompletableFuture<Void> cancel = new CompletableFuture<Void>() {
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
cf.cancel(mayInterruptIfRunning);
return super.cancel(mayInterruptIfRunning);
}
};
return cancel;
}
}
@Test
public void simple() {
AsyncRange r = new AsyncRange(1, 10);
new SomeAsyncApiBridge<Integer>(
consumer -> r.next(consumer)
)
.test()
.awaitDone(500, TimeUnit.SECONDS)
.assertResult(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
}
}
This is something that appears to work, using Reactor's Flux.create(). I changed the API a bit.
public class FlowableGenerate4
{
private static class API
{
private ExecutorService es = Executors.newFixedThreadPool(1);
private CompletableFuture<Void> done = new CompletableFuture<>();
private AtomicInteger stopCounter = new AtomicInteger(10);
public boolean isDone()
{
return done.isDone();
}
public CompletableFuture<Void> getT(Consumer<Object> consumer)
{
es.submit(() -> {
try {
Thread.sleep(100);
} catch (Exception e) {
}
if (stopCounter.decrementAndGet() < 0)
done.complete(null);
else
consumer.accept(new Object());
});
return done;
}
}
private static class Callback implements Consumer<Object>
{
private API api;
private FluxSink<Object> sink;
public Callback(API api, FluxSink<Object> sink)
{
this.api = api;
this.sink = sink;
}
@Override
public void accept(Object o)
{
sink.next(o);
if (sink.requestedFromDownstream() > 0 && !api.isDone())
api.getT(this);
else
sink.currentContext().<AtomicBoolean>get("inProgress")
.set(false);
}
}
private Publisher<Object> reactorPublisher()
{
API api = new API();
return
Flux.create(sink -> {
sink.onRequest(n -> {
//if it's already in progress, do nothing
//I understand that onRequest() can be called asynchronously,
//regardless of whether the previous call's demand has been satisfied or not
if (!sink.currentContext().<AtomicBoolean>get("inProgress")
.compareAndSet(false, true))
return;
//else kick off calls to API
api.getT(new Callback(api, sink))
.whenComplete((o, t) -> {
if (t != null)
sink.error(t);
else
sink.complete();
});
});
}).subscriberContext(
Context.empty().put("inProgress", new AtomicBoolean(false)));
}
@Test
public void test()
{
Flowable.fromPublisher(reactorPublisher())
.skip(5)
.take(10)
.blockingSubscribe(
i -> System.out.println("onNext()"),
Throwable::printStackTrace,
() -> System.out.println("onComplete()")
);
}
}
Scenario: I have a stream of data I am reading from the database. What I would like to do is read a chunk of data, process it, and stream it using RxJava 2. But while I am processing and streaming the current chunk, I would like to load the next chunk of data on a separate thread (pre-pull the next chunk).
I have tried:
Flowable.generate(...)
.subscribeOn(Schedulers.io())
.observeOn(Schedulers.computation())
.map(...)
.subscribe(...)
Unfortunately this causes the generate method to run continually on an IO thread. I just want one pre-pull. I have tried using buffer, but that really just ends up creating lists of chunks.
So basically while I am streaming the current chunk on a separate thread I want to read the next chunk and have it ready.
Not sure if this is possible. I need to use generate because there is no concept of when the data will end.
I have tried subscribe(new FlowableSubscriber(){...}) with Subscription::request, but that did not seem to work.
There are no standard operators in RxJava that would have this type of request-response pattern. You'd need a custom observeOn that requests before it sends the current item to its downstream.
import java.util.concurrent.atomic.*;
import org.junit.Test;
import org.reactivestreams.*;
import io.reactivex.*;
import io.reactivex.Scheduler.Worker;
import io.reactivex.internal.util.BackpressureHelper;
import io.reactivex.schedulers.Schedulers;
public class LockstepObserveOnTest {
@Test
public void test() {
Flowable.generate(() -> 0, (s, e) -> {
System.out.println("Generating " + s);
Thread.sleep(500);
e.onNext(s);
return s + 1;
})
.subscribeOn(Schedulers.io())
.compose(new LockstepObserveOn<>(Schedulers.computation()))
.map(v -> {
Thread.sleep(250);
System.out.println("Processing " + v);
Thread.sleep(250);
return v;
})
.take(50)
.blockingSubscribe();
}
static final class LockstepObserveOn<T> extends Flowable<T>
implements FlowableTransformer<T, T> {
final Flowable<T> source;
final Scheduler scheduler;
LockstepObserveOn(Scheduler scheduler) {
this(null, scheduler);
}
LockstepObserveOn(Flowable<T> source, Scheduler scheduler) {
this.source = source;
this.scheduler = scheduler;
}
@Override
protected void subscribeActual(Subscriber<? super T> subscriber) {
source.subscribe(new LockstepObserveOnSubscriber<>(
subscriber, scheduler.createWorker()));
}
@Override
public Publisher<T> apply(Flowable<T> upstream) {
return new LockstepObserveOn<>(upstream, scheduler);
}
static final class LockstepObserveOnSubscriber<T>
implements FlowableSubscriber<T>, Subscription, Runnable {
final Subscriber<? super T> actual;
final Worker worker;
final AtomicReference<T> item;
final AtomicLong requested;
final AtomicInteger wip;
Subscription upstream;
volatile boolean cancelled;
volatile boolean done;
Throwable error;
long emitted;
LockstepObserveOnSubscriber(Subscriber<? super T> actual, Worker worker) {
this.actual = actual;
this.worker = worker;
this.item = new AtomicReference<>();
this.requested = new AtomicLong();
this.wip = new AtomicInteger();
}
@Override
public void onSubscribe(Subscription s) {
upstream = s;
actual.onSubscribe(this);
s.request(1);
}
@Override
public void onNext(T t) {
item.lazySet(t);
schedule();
}
@Override
public void onError(Throwable t) {
error = t;
done = true;
schedule();
}
@Override
public void onComplete() {
done = true;
schedule();
}
@Override
public void request(long n) {
BackpressureHelper.add(requested, n);
schedule();
}
@Override
public void cancel() {
cancelled = true;
upstream.cancel();
worker.dispose();
if (wip.getAndIncrement() == 0) {
item.lazySet(null);
}
}
void schedule() {
if (wip.getAndIncrement() == 0) {
worker.schedule(this);
}
}
@Override
public void run() {
int missed = 1;
long e = emitted;
for (;;) {
long r = requested.get();
while (e != r) {
if (cancelled) {
item.lazySet(null);
return;
}
boolean d = done;
T v = item.get();
boolean empty = v == null;
if (d && empty) {
Throwable ex = error;
if (ex == null) {
actual.onComplete();
} else {
actual.onError(ex);
}
worker.dispose();
return;
}
if (empty) {
break;
}
item.lazySet(null);
upstream.request(1);
actual.onNext(v);
e++;
}
if (e == r) {
if (cancelled) {
item.lazySet(null);
return;
}
if (done && item.get() == null) {
Throwable ex = error;
if (ex == null) {
actual.onComplete();
} else {
actual.onError(ex);
}
worker.dispose();
return;
}
}
emitted = e;
missed = wip.addAndGet(-missed);
if (missed == 0) {
break;
}
}
}
}
}
}
I have a simple counter actor implemented in Java:
public class CounterJavaActor extends UntypedActor {
int count = 0;
@Override
public void onReceive(Object message) throws Exception {
if (message.equals("incr")) {
count += 1;
} else if (message.equals("get")) {
sender().tell(count, self());
}
}
}
In the Coursera course "Functional reactive programming in scala", I saw a functional implementation of the counter:
/**
* Advantages:
* state change is explicit
* state is scoped to current behaviour
*/
class CounterScala extends Actor{
def counter(n: Int) : Receive = {
case "incr" => context.become(counter(n+1))
case "get" => sender ! n
}
def receive = counter(0)
}
Update:
My problem is that in Java I can't make a recursive call to the behaviour like counter(n+1) in Scala. Here is what I mean:
public class CounterJava8Actor extends AbstractActor {
//counter(0) in scala
private PartialFunction<Object, BoxedUnit> counter;
private int n = 0;
public CounterJava8Actor() {
counter =
ReceiveBuilder.
matchEquals("get", s -> {
sender().tell(n, self());
}).
matchEquals("inc", s -> {
//become(counter(n+1)) in Scala
context().become(counter);
}).build();
receive(counter);
}
}
Is it possible to implement it in a functional style in Java?
According to the docs you can use become/unbecome with Java 8:
http://doc.akka.io/docs/akka/snapshot/java/lambda-actors.html#become-unbecome
Here is the sample code copied from there:
public class HotSwapActor extends AbstractActor {
private PartialFunction<Object, BoxedUnit> angry;
private PartialFunction<Object, BoxedUnit> happy;
public HotSwapActor() {
angry =
ReceiveBuilder.
matchEquals("foo", s -> {
sender().tell("I am already angry?", self());
}).
matchEquals("bar", s -> {
context().become(happy);
}).build();
happy = ReceiveBuilder.
matchEquals("bar", s -> {
sender().tell("I am already happy :-)", self());
}).
matchEquals("foo", s -> {
context().become(angry);
}).build();
receive(ReceiveBuilder.
matchEquals("foo", s -> {
context().become(angry);
}).
matchEquals("bar", s -> {
context().become(happy);
}).build()
);
}
}
Or you can use UntypedActor as explained in the docs here:
http://doc.akka.io/docs/akka/snapshot/java/untyped-actors.html
public class Manager extends UntypedActor {
public static final String SHUTDOWN = "shutdown";
ActorRef worker = getContext().watch(getContext().actorOf(
Props.create(Cruncher.class), "worker"));
public void onReceive(Object message) {
if (message.equals("job")) {
worker.tell("crunch", getSelf());
} else if (message.equals(SHUTDOWN)) {
worker.tell(PoisonPill.getInstance(), getSelf());
getContext().become(shuttingDown);
}
}
Procedure<Object> shuttingDown = new Procedure<Object>() {
@Override
public void apply(Object message) {
if (message.equals("job")) {
getSender().tell("service unavailable, shutting down", getSelf());
} else if (message instanceof Terminated) {
getContext().stop(getSelf());
}
}
};
}
To see how to add a parameter to a Procedure, you can see this answer:
Akka/Java getContext().become with parameter?
And here is the actual solution with Java 8:
private PartialFunction<Object, BoxedUnit> counter(final int n) {
return ReceiveBuilder.
matchEquals("get", s -> {
sender().tell(n, self());
}).
matchEquals("inc", s -> {
context().become(counter(n + 1));
}).build();
}
public CounterJava8Actor() {
receive(counter(0));
}