RxJava adjust backpressure avoiding observeOn buffer

In the code below I would like the subscriber to control when the Flowable emits an event by holding a reference to the Subscription inside subscribe() and requesting the number of elements I want to be produced.
What I am experiencing is that observeOn()'s buffer with size 2 is hiding my call to subscription.request(3) as the producer is producing 2 elements at a time instead of 3.
public class FlowableExamples {
public static void main(String[] args) throws InterruptedException {
long start = new Date().getTime();
Flowable<Integer> flowable = Flowable
.generate(() -> 0, (Integer state, Emitter<Integer> emitter) -> {
int newValue = state + 1;
log("Producing: " + newValue);
emitter.onNext(newValue);
return newValue;
})
.take(30);
flowable
.subscribeOn(Schedulers.io())
.observeOn(Schedulers.computation(), false, 2)
.subscribe(new Subscriber<Integer>() {
Subscription subscription;
@Override
public void onSubscribe(Subscription subscription) {
this.subscription = subscription;
subscription.request(5);
}
@Override
public void onNext(Integer integer) {
log("\t\treceived: " + integer);
if (integer >= 5) {
sleep(500);
log("Requesting 3 should produce 3, but actually produced 2");
subscription.request(3);
sleep(1000);
}
}
@Override
public void onError(Throwable throwable) {}
@Override
public void onComplete() {
log("Subscription Completed!!!!!!!!");
}
});
sleep(40_000);
System.out.println("Exit main after: " + (new Date().getTime() - start) + " ms");
}
private static void log(String msg) {
System.out.println(Thread.currentThread().getName() + ": " + msg);
}
private static void sleep(long ms) {
try {
Thread.sleep(ms);
} catch (InterruptedException e) {}
}
}
How could I accomplish this?

Related

RxJava Observable emits items ONLY when subscribed with an Observer as an anonymous type

When I create a new Observer as an anonymous type, it works fine:
Observable<List<Post>> postsListObservable = mApplicationAPI.getPosts();
postsListObservable.
subscribeOn(Schedulers.io()).
observeOn(AndroidSchedulers.mainThread()).subscribe( new Observer<List<Post>>() {
@Override
public void onSubscribe(Disposable d) {
Log.i("ZOKa", "onSubscribe: ");
}
@Override
public void onNext(List<Post> posts) {
Log.i("ZOKa", "onNext: " + posts.size());
}
@Override
public void onError(Throwable e) {
Log.i("ZOKa", "onError: " + e.getMessage());
}
@Override
public void onComplete() {
Log.i("ZOKa", "onComplete: ");
}
});
When I declare the Observer as a separate variable instead, it doesn't emit data:
Observable<List<Post>> postsListObservable = mApplicationAPI.getPosts();
postsListObservable.
subscribeOn(Schedulers.io()).
observeOn(AndroidSchedulers.mainThread());
Observer<List<Post>> observer = new Observer<List<Post>>() {
@Override
public void onSubscribe(Disposable d) {
Log.i("ZOKa", "onSubscribe: ");
}
@Override
public void onNext(List<Post> posts) {
Log.i("ZOKa", "onNext: " + posts.size());
}
@Override
public void onError(Throwable e) {
Log.i("ZOKa", "onError: " + e.getMessage());
}
@Override
public void onComplete() {
Log.i("ZOKa", "onComplete: ");
}
};
postsListObservable.subscribe(observer);
Logcat for the first code snippet:
com.tripleService.basesetupfordi/I/ZOKa: onSubscribe:
com.tripleService.basesetupfordi/I/ZOKa: onNext: 100:
com.tripleService.basesetupfordi/I/ZOKa: onComplete:
Logcat for the second one:
com.tripleService.basesetupfordi/I/ZOKa: onError: null
So, what is the difference between the two?
That's because operators return new Observables; they don't modify the Observable they were called on. In the second example, subscribeOn and observeOn have no effect on postsListObservable, so the observer ends up subscribing to the original, unmodified Observable.
The following should work:
Observable<List<Post>> postsListObservable = mApplicationAPI.getPosts();
Observable<List<Post>> postsListObservable2 = postsListObservable.
subscribeOn(Schedulers.io()).
observeOn(AndroidSchedulers.mainThread());
Observer<List<Post>> observer = new Observer<List<Post>>() {
...
};
postsListObservable2.subscribe(observer);
or
Observable<List<Post>> postsListObservable = mApplicationAPI.getPosts();
Observer<List<Post>> observer = new Observer<List<Post>>() {
...
};
postsListObservable.
subscribeOn(Schedulers.io()).
observeOn(AndroidSchedulers.mainThread()).subscribe(observer);

RxJava count() when one of two Observables has not completed

I want to count the items from the stream Observable while trigger has not completed, and update the view with the stream's size whenever trigger fires. As long as trigger has not completed, the Consumer never invokes accept(). How can I resolve this?
Observable<Long> trigger = Observable.interval(2000L, TimeUnit.MILLISECONDS);
Observable<Long> stream = trigger
.flatMap(new Function<Long, ObservableSource<?>>() {
@Override
public ObservableSource<?> apply(Long aLong) throws Exception {
return Observable.just("A", "B", "C"); //completed observable
}
})
.count()
.toObservable();
stream.subscribe(new Consumer<Long>() {
@Override
public void accept(Long size) throws Exception {
Log.e("Elements: ", size.toString());
}
});
Do a rolling count with scan:
Observable<Long> trigger = Observable.interval(2000L, TimeUnit.MILLISECONDS);
Observable<Long> stream = trigger
.flatMap(new Function<Long, ObservableSource<String>>() {
@Override
public ObservableSource<String> apply(Long aLong) throws Exception {
return Observable.just("A", "B", "C"); //completed observable
}
})
.scan(0L, new BiFunction<Long, String, Long>() {
@Override public Long apply(Long a, String b) {
return a + 1;
}
})
;
stream.subscribe(new Consumer<Long>() {
@Override
public void accept(Long size) throws Exception {
Log.e("Elements: ", size.toString());
}
});

RxJava 2: pre-pull the next item on a separate thread

Scenario: I have a stream of data that I am reading from the database. I would like to read a chunk of data, process it, and stream it using RxJava 2, but while I am processing and streaming the current chunk I would like to load the next chunk of data on a separate thread (pre-pull the next chunk).
I have tried:
Flowable.generate(...)
.subscribeOn(Schedulers.io())
.observeOn(Schedulers.computation())
.map(...)
.subscribe(...)
Unfortunately this causes the generate method to run continually on an IO thread; I just want one pre-pull. I have tried using buffer, but that really just ends up creating lists of chunks.
So basically, while I am processing and streaming the current chunk on a separate thread, I want to read the next chunk and have it ready.
I am not sure if this is possible. I need to use generate because there is no concept of when the data will end.
I have also tried subscribe(new FlowableSubscriber(){...}) with Subscription::request, but that did not seem to work.
There are no standard operators in RxJava that would have this type of request-response pattern. You'd need a custom observeOn that requests before it sends the current item to its downstream.
import java.util.concurrent.atomic.*;
import org.junit.Test;
import org.reactivestreams.*;
import io.reactivex.*;
import io.reactivex.Scheduler.Worker;
import io.reactivex.internal.util.BackpressureHelper;
import io.reactivex.schedulers.Schedulers;
public class LockstepObserveOnTest {
@Test
public void test() {
Flowable.generate(() -> 0, (s, e) -> {
System.out.println("Generating " + s);
Thread.sleep(500);
e.onNext(s);
return s + 1;
})
.subscribeOn(Schedulers.io())
.compose(new LockstepObserveOn<>(Schedulers.computation()))
.map(v -> {
Thread.sleep(250);
System.out.println("Processing " + v);
Thread.sleep(250);
return v;
})
.take(50)
.blockingSubscribe();
}
static final class LockstepObserveOn<T> extends Flowable<T>
implements FlowableTransformer<T, T> {
final Flowable<T> source;
final Scheduler scheduler;
LockstepObserveOn(Scheduler scheduler) {
this(null, scheduler);
}
LockstepObserveOn(Flowable<T> source, Scheduler scheduler) {
this.source = source;
this.scheduler = scheduler;
}
@Override
protected void subscribeActual(Subscriber<? super T> subscriber) {
source.subscribe(new LockstepObserveOnSubscriber<>(
subscriber, scheduler.createWorker()));
}
@Override
public Publisher<T> apply(Flowable<T> upstream) {
return new LockstepObserveOn<>(upstream, scheduler);
}
static final class LockstepObserveOnSubscriber<T>
implements FlowableSubscriber<T>, Subscription, Runnable {
final Subscriber<? super T> actual;
final Worker worker;
final AtomicReference<T> item;
final AtomicLong requested;
final AtomicInteger wip;
Subscription upstream;
volatile boolean cancelled;
volatile boolean done;
Throwable error;
long emitted;
LockstepObserveOnSubscriber(Subscriber<? super T> actual, Worker worker) {
this.actual = actual;
this.worker = worker;
this.item = new AtomicReference<>();
this.requested = new AtomicLong();
this.wip = new AtomicInteger();
}
@Override
public void onSubscribe(Subscription s) {
upstream = s;
actual.onSubscribe(this);
s.request(1);
}
@Override
public void onNext(T t) {
item.lazySet(t);
schedule();
}
@Override
public void onError(Throwable t) {
error = t;
done = true;
schedule();
}
@Override
public void onComplete() {
done = true;
schedule();
}
@Override
public void request(long n) {
BackpressureHelper.add(requested, n);
schedule();
}
@Override
public void cancel() {
cancelled = true;
upstream.cancel();
worker.dispose();
if (wip.getAndIncrement() == 0) {
item.lazySet(null);
}
}
void schedule() {
if (wip.getAndIncrement() == 0) {
worker.schedule(this);
}
}
@Override
public void run() {
int missed = 1;
long e = emitted;
for (;;) {
long r = requested.get();
while (e != r) {
if (cancelled) {
item.lazySet(null);
return;
}
boolean d = done;
T v = item.get();
boolean empty = v == null;
if (d && empty) {
Throwable ex = error;
if (ex == null) {
actual.onComplete();
} else {
actual.onError(ex);
}
worker.dispose();
return;
}
if (empty) {
break;
}
item.lazySet(null);
upstream.request(1);
actual.onNext(v);
e++;
}
if (e == r) {
if (cancelled) {
item.lazySet(null);
return;
}
if (done && item.get() == null) {
Throwable ex = error;
if (ex == null) {
actual.onComplete();
} else {
actual.onError(ex);
}
worker.dispose();
return;
}
}
emitted = e;
missed = wip.addAndGet(-missed);
if (missed == 0) {
break;
}
}
}
}
}
}

How to watch for the client losing leadership with ZooKeeper Curator?

As we know, when the client acquires leadership, takeLeadership will be invoked, but the documentation does not tell me when the client has lost leadership. So, how can I watch for the client losing leadership with ZooKeeper Curator?
Question two: when my client loses leadership, why can I not hit a breakpoint in stateChanged(...) while debugging through IDEA?
Here is my code; I look forward to your answers, thanks.
public class ExampleClient extends LeaderSelectorListenerAdapter implements Closeable{
private final String name;
private final LeaderSelector leaderSelector;
private final AtomicInteger leaderCount = new AtomicInteger(); // records how many times this client has held leadership
public ExampleClient(CuratorFramework client,String path,String name) {
this.name = name;
leaderSelector = new LeaderSelector(client, path, this);
leaderSelector.autoRequeue(); // stay eligible to re-acquire leadership after relinquishing it
}
public void start() throws IOException {
leaderSelector.start();
}
@Override
public void close() throws IOException {
leaderSelector.close();
}
@Override
public void stateChanged(CuratorFramework client, ConnectionState newState)
{
if ((newState == ConnectionState.SUSPENDED) || (newState == ConnectionState.LOST) ) {
log.info("stateChanged !!!");
throw new CancelLeadershipException();
}
}
/**
* Will be invoked when leadership is acquired.
* @param client
* @throws Exception
*/
@Override
public void takeLeadership(CuratorFramework client) throws Exception {
final int waitSeconds =(int)(Math.random()*5)+1;
log.info(name + " is the leader now,wait " + waitSeconds + " seconds!");
log.info(name + " had been leader for " + leaderCount.getAndIncrement() + " time(s) before");
try {
/**/
Thread.sleep(TimeUnit.SECONDS.toMillis(waitSeconds));
//do something!!!
/*while(true){
//guarantee this client be the leader all the time!
}*/
}catch (InterruptedException e){
log.info(name+" was interrupted!");
Thread.currentThread().interrupt();
}finally{
log.info(name+" relinquishing leadership.\n");
}
}
}
LeaderLatchListener has two callbacks, isLeader and notLeader. Some examples:
http://www.programcreek.com/java-api-examples/index.php?api=org.apache.curator.framework.recipes.leader.LeaderLatchListener
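A minimal sketch of that approach, assuming a LeaderLatch is acceptable instead of (or alongside) the LeaderSelector; the class name, latch path, and id below are illustrative placeholders, not part of the original question:
import java.io.Closeable;
import java.io.IOException;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;

public class LeadershipWatcher implements Closeable {
    private final LeaderLatch latch;

    public LeadershipWatcher(CuratorFramework client, String latchPath, String id) {
        latch = new LeaderLatch(client, latchPath, id);
        latch.addListener(new LeaderLatchListener() {
            @Override
            public void isLeader() {
                System.out.println(id + " gained leadership"); // fires each time leadership is acquired
            }

            @Override
            public void notLeader() {
                System.out.println(id + " lost leadership"); // fires each time leadership is lost
            }
        });
    }

    public void start() throws Exception {
        latch.start(); // the CuratorFramework client must already be started
    }

    @Override
    public void close() throws IOException {
        latch.close();
    }
}
With the LeaderSelector in the question, returning from takeLeadership (or the CancelLeadershipException thrown in stateChanged on SUSPENDED/LOST) is effectively the moment leadership is given up, so logging in a finally block there is another way to observe it.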

Flink Kafka Consumer throws NullPointerException when using DataStream keyBy

I am using this Flink CEP example, but I have split it into two applications: one that sends data to Kafka and another that reads from Kafka. I generated the producer for the class TemperatureWarning, i.e. I was sending TemperatureWarning data to Kafka. Following is my code which consumes the data from Kafka...
StreamExecutionEnvironment env=StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
env.enableCheckpointing(5000);
Properties properties=new Properties();
properties.setProperty("bootstrap.servers", "PUBLICDNS:9092");
properties.setProperty("zookeeper.connect", "PUBLICDNS:2181");
properties.setProperty("group.id", "test");
DataStream<TemperatureWarning> dstream=env.addSource(new FlinkKafkaConsumer09<TemperatureWarning>("MonitoringEvent", new MonitoringEventSchema(), properties));
Pattern<TemperatureWarning, ?> alertPattern = Pattern.<TemperatureWarning>begin("first")
.next("second")
.within(Time.seconds(20));
PatternStream<TemperatureWarning> alertPatternStream = CEP.pattern(
dstream.keyBy("rackID"),
alertPattern);
DataStream<TemperatureAlert> alerts = alertPatternStream.flatSelect(
(Map<String, TemperatureWarning> pattern, Collector<TemperatureAlert> out) -> {
TemperatureWarning first = pattern.get("first");
TemperatureWarning second = pattern.get("second");
if (first.getAverageTemperature() < second.getAverageTemperature()) {
out.collect(new TemperatureAlert(second.getRackID(),second.getAverageTemperature(),second.getTimeStamp()));
}
});
dstream.print();
alerts.print();
env.execute("Flink Kafka Consumer");
But when I execute this application,it throws following Exception:
Exception in thread "main" java.lang.NullPointerException
at org.apache.flink.api.common.operators.Keys$ExpressionKeys.<init>(Keys.java:329)
at org.apache.flink.streaming.api.datastream.DataStream.keyBy(DataStream.java:274)
at com.yash.consumer.KafkaFlinkConsumer.main(KafkaFlinkConsumer.java:49)
Following is my class TemperatureWarning :
public class TemperatureWarning {
private int rackID;
private double averageTemperature;
private long timeStamp;
public TemperatureWarning(int rackID, double averageTemperature,long timeStamp) {
this.rackID = rackID;
this.averageTemperature = averageTemperature;
this.timeStamp=timeStamp;
}
public TemperatureWarning() {
this(-1, -1,-1);
}
public int getRackID() {
return rackID;
}
public void setRackID(int rackID) {
this.rackID = rackID;
}
public double getAverageTemperature() {
return averageTemperature;
}
public void setAverageTemperature(double averageTemperature) {
this.averageTemperature = averageTemperature;
}
public long getTimeStamp() {
return timeStamp;
}
public void setTimeStamp(long timeStamp) {
this.timeStamp = timeStamp;
}
@Override
public boolean equals(Object obj) {
if (obj instanceof TemperatureWarning) {
TemperatureWarning other = (TemperatureWarning) obj;
return rackID == other.rackID && averageTemperature == other.averageTemperature;
} else {
return false;
}
}
@Override
public int hashCode() {
return 41 * rackID + Double.hashCode(averageTemperature);
}
@Override
public String toString() {
//return "TemperatureWarning(" + getRackID() + ", " + averageTemperature + ")";
return "TemperatureWarning(" + getRackID() +","+averageTemperature + ") "+ "," + getTimeStamp();
}
}
Following is my class MonitoringEventSchema :
public class MonitoringEventSchema implements DeserializationSchema<TemperatureWarning>,SerializationSchema<TemperatureWarning>
{
@Override
public TypeInformation<TemperatureWarning> getProducedType() {
// TODO Auto-generated method stub
return null;
}
@Override
public byte[] serialize(TemperatureWarning element) {
// TODO Auto-generated method stub
return element.toString().getBytes();
}
@Override
public TemperatureWarning deserialize(byte[] message) throws IOException {
// TODO Auto-generated method stub
if(message!=null)
{
String str=new String(message,"UTF-8");
String []val=str.split(",");
TemperatureWarning warning=new TemperatureWarning(Integer.parseInt(val[0]),Double.parseDouble(val[1]),Long.parseLong(val[2]));
return warning;
}
return null;
}
@Override
public boolean isEndOfStream(TemperatureWarning nextElement) {
// TODO Auto-generated method stub
return false;
}
}
Now, what is required to perform the keyBy operation, given that I have already specified the key on which the stream should be partitioned? What needs to be done here to solve this error?
The problem is in this function:
@Override
public TypeInformation<TemperatureWarning> getProducedType() {
// TODO Auto-generated method stub
return null;
}
You cannot return null here: keyBy("rackID") needs the TypeInformation of the stream's elements to resolve the field by name, and that TypeInformation comes from the DeserializationSchema's getProducedType().
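A minimal sketch of a non-null implementation, assuming a Flink version where TypeInformation.of(Class) is available (older versions can use TypeExtractor.getForClass instead):
import org.apache.flink.api.common.typeinfo.TypeInformation;

@Override
public TypeInformation<TemperatureWarning> getProducedType() {
    // Tells Flink what element type this schema produces, so keyBy("rackID") can resolve the field by name.
    return TypeInformation.of(TemperatureWarning.class);
}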