I have two hot observables, which are respectively a stream Q of requests to a network server, and a stream R of replies from the server. The replies are always delivered in the order of requests, and every request is going to receive exactly one reply eventually. Thus the first event in R, R1, is the reply to the first event in Q, Q1, and so on. I need to detect when a reply Rn takes longer than a defined timeout and signal this timeout condition.
Q   --1---2---------3------->    // Requests Q1, Q2, ...
R   ----1------------------->    // Replies
Out ------------------O-|>       // Oops: reply R2 to Q2 did not arrive within time τ.
          |<----τ---->|
Events Qn and Rn do not contain any identifying information (think of plain colorless round marbles), and the indices in the diagram are just sequential numbers introduced for explanation.
I seem unable to solve this riddle. I tried the approach below, but it appears I am matching the latest request Qi to the latest response Rj. In the sample, Q contains 5 requests spaced 500 ms apart, and replies in R come 750 ms apart with an additional 200 ms delay, but only 4 of them arrive (the 5th is delayed indefinitely). The code does not detect that, because the last reply R4 still falls within the 1000 ms timeout measured from the latest request Q5 (it arrives 700 ms after Q5).
var Q = Observable.Interval(TimeSpan.FromMilliseconds(500)).Select(_ => Unit.Default)
    .Take(5).Concat(Observable.Never<Unit>());
var R = Observable.Interval(TimeSpan.FromMilliseconds(750)).Select(_ => Unit.Default)
    .Delay(TimeSpan.FromMilliseconds(200))
    .Take(4).Concat(Observable.Never<Unit>());

var dq = Q.Select(v => Observable.Return(v).Delay(TimeSpan.FromMilliseconds(1000)));
var dr = Observable.Zip(Q, R, (_1, _2) => Observable.Never<Unit>());

Observable.Merge(dq, dr).Dump().Switch().Dump();
I believe that you want to be notified that request 4 has timed out (its reply is due by 3 s but only arrives at 3.2 s) and also request 5, whose reply never arrives.
void Main()
{
    var scheduler = new TestScheduler();

    var requests = scheduler.CreateHotObservable<int>(
        ReactiveTest.OnNext(0500.Ms(), 1),
        ReactiveTest.OnNext(1000.Ms(), 2),
        ReactiveTest.OnNext(1500.Ms(), 3),
        ReactiveTest.OnNext(2000.Ms(), 4),
        ReactiveTest.OnNext(2500.Ms(), 5));

    var responses = scheduler.CreateHotObservable<Unit>(
        ReactiveTest.OnNext(0950.Ms(), Unit.Default),
        ReactiveTest.OnNext(1700.Ms(), Unit.Default),
        ReactiveTest.OnNext(2450.Ms(), Unit.Default),
        ReactiveTest.OnNext(3200.Ms(), Unit.Default));

    var expected = scheduler.CreateHotObservable<int>(
        ReactiveTest.OnNext(3000.Ms(), 4),
        ReactiveTest.OnNext(3500.Ms(), 5));

    var observer = scheduler.CreateObserver<int>();

    var query = responses
        .Select((val, idx) => idx)      // number the replies
        .Publish(responseIdxs =>
        {
            // For each request, start a one-second timer. If the reply with the
            // matching index arrives first, TakeUntil cancels the timer; otherwise
            // the timer fires and we emit the request that timed out.
            return requests.SelectMany((q, qIdx) =>
                Observable.Timer(TimeSpan.FromSeconds(1), scheduler)
                    .TakeUntil(responseIdxs.Where(rIdx => qIdx == rIdx))
                    .Select(_ => q));
        });

    query.Subscribe(observer);

    scheduler.Start();

    // This test passes
    ReactiveAssert.AreElementsEqual(
        expected.Messages,
        observer.Messages);
}
// Define other methods and classes here
public static class TickExtensions
{
    public static long Ms(this int ms)
    {
        return TimeSpan.FromMilliseconds(ms).Ticks;
    }
}
Found the programming language "pony" just today... and started to play with it.
My code is supposed to do a simple producer-consumer thing.
As claimed by the language documentation, the language ensures there are no data races.
Here, main sends 10 messages to the producer, which in turn sends 10 messages to the consumer. The consumer increments a counter state variable. Then, main sends a message to the consumer, which in turn sends a message to main in order to display the current value. If all messages were in sequence, the expected value would be 9 (or 10). The result printed, though is 0.
As this is all in hour 1 of my playing with the language, of course I might have messed up something else.
Can anyone explain my mistake?
use "collections"
actor Consumer
var _received : I32
new create() =>
_received = 0
be tick() =>
_received = _received + 1
be query(main : Main) =>
main.status(_received)
actor Producer
var _consumer : Consumer
new create(consumer' : Consumer) =>
_consumer = consumer'
be produceOne () =>
_consumer.tick()
actor Main
var _env : Env
new create(env: Env) =>
_env = env
let c : Consumer = Consumer.create()
let p = Producer.create(c)
for i in Range[I32](0,10) do
p.produceOne()
end
c.query(this)
be status( count : I32) =>
// let fortyTwo : I32 = 42
// _env.out.print( "fortytwo? " + (fortyTwo.string()))
_env.out.print( "produced: " + (count.string()) )
Running on Windows 10, 64-bit, btw, with the latest and greatest zip file installation I found:
0.10.0-1c33065 [release]
compiled with: llvm 3.9.0 -- ?
The data races prevented by Pony are the ones that occur at the memory level, when somebody reads from a memory location while somebody else is writing to it. This is prevented by forbidding shared mutable state with the type system.
However, your program can still have "logical" data races if a result depends on a message ordering that isn't guaranteed by Pony. Pony guarantees causal ordering of messages: any message an actor sends or receives is a cause of every message it subsequently sends or receives, and when a cause and its effect have the same destination, the cause is guaranteed to be delivered before the effect.
actor A
  be ma(b: B, c: C) =>
    b.mb()
    c.mc(b)

actor B
  be mb() =>
    None

actor C
  be mc(b: B) =>
    b.mb()
In this example, B will always receive the message from A before the message from C because A sends a message to B before sending a message to C (note that the two messages can still be received in any order since they don't have the same destination). This means that the message sent to B by C is sent after the message sent to B by A and since both have the same destination, there is a causal relationship.
Let's look at the causal orderings in your program. With -> being "is the cause of", we have
Main.create -> Main.status (through Consumer.query)
Consumer.create -> Consumer.query
Consumer.create -> Consumer.tick (through Producer.produceOne)
Producer.create -> Producer.produceOne
As you can see, there is no causal relationship between Consumer.query and Consumer.tick. In the sense of the actual implementation, this means that Main can send the produceOne messages and then send the query message before any Producer starts executing the message it received and has a chance to send the tick message. If you run your program with one scheduler thread (--ponythreads=1 as a command line argument) it will always print produced: 0 because Main will monopolise the only scheduler until the end of create. With multiple scheduler threads, anything between 0 and 10 can happen because all schedulers could be busy executing other actors, or be available to start executing the Producers immediately.
In summary, your tick and query behaviours can be executed in any order. To fix the problem, you'd have to introduce causality between your messages, either by adding round-trip messages or by doing the accumulating and the printing in the same actor.
Thanks to @Benoit Vey for the help with that.
It is indeed the case that there is no express or implied causality between the execution of the query and the time when the producer executes its tick() messaging to the consumer.
In that sense, no voodoo, no magic. It all just behaves as any actor system would be expected to behave.
Messages within an actor are processed in order (as they should be). Hence, in order to get the desired program behavior, the producer should trigger the query eventually (in order, after processing the produceOne messages).
Here is how that can be accomplished:
use "collections"
actor Consumer
var _received : I32
new create() =>
_received = 0
be tick() =>
_received = _received + 1
be query(main : Main) =>
main.status(_received)
actor Producer
var _consumer : Consumer
new create(consumer' : Consumer) =>
_consumer = consumer'
be produceOne () =>
_consumer.tick()
be forward (main : Main) =>
main.doQuery(_consumer)
actor Main
var _env : Env
new create(env: Env) =>
_env = env
let c : Consumer = Consumer.create()
let p = Producer.create(c)
for i in Range[I32](0,10) do
p.produceOne()
end
//c.query(this)
p.forward(this)
be doQuery (target : Consumer) =>
target.query(this)
be status( count : I32) =>
// let fortyTwo : I32 = 42
// _env.out.print( "fortytwo? " + (fortyTwo.string()))
_env.out.print( "produced: " + (count.string()) )
Just for giggles (and comparison), I also implemented the same in F#. And to my surprise, Pony wins in the category of compactness: 39 lines of code in Pony versus 80 lines in F#. That, along with the native code generation, makes Pony an interesting language choice indeed.
open FSharp.Control

type ConsumerMessage =
    | Tick
    | Query of MailboxProcessor<MainMessage>
and ProducerMessage =
    | ProduceOne of MailboxProcessor<ConsumerMessage>
    | Forward of (MailboxProcessor<MainMessage> * MainMessage)
and MainMessage =
    | Status of int
    | DoQuery of MailboxProcessor<ConsumerMessage>

let consumer =
    new MailboxProcessor<ConsumerMessage>
        (fun inbox ->
            let rec loop count =
                async {
                    let! m = inbox.Receive()
                    match m with
                    | Tick ->
                        return! loop (count+1)
                    | Query(target) ->
                        do target.Post(Status count)
                        return! loop count
                }
            loop 0
        )

let producer =
    new MailboxProcessor<ProducerMessage>
        (fun inbox ->
            let rec loop () =
                async {
                    let! m = inbox.Receive()
                    match m with
                    | ProduceOne(consumer') ->
                        consumer'.Post(Tick)
                        return! loop ()
                    | Forward (target, msg) ->
                        target.Post(msg)
                        return! loop ()
                }
            loop ()
        )

let main =
    new MailboxProcessor<MainMessage>
        (fun inbox ->
            let rec loop () =
                async {
                    let! m = inbox.Receive()
                    match m with
                    | Status(count) ->
                        printfn "Status: %d" count
                        return! loop ()
                    | DoQuery(target) ->
                        target.Post(Query inbox)
                        return! loop ()
                }
            loop ()
        )

let init() =
    main.Start()
    consumer.Start()
    producer.Start()

let run() =
    for _ in [1..10] do
        producer.Post(ProduceOne consumer)
    producer.Post(Forward(main, DoQuery consumer))

let query () =
    consumer.Post(Query main)

let go() =
    init ()
    run ()
    //query ()
I'm trying to learn the RxJS library. One of the cases I don't quite understand is described in this jsfiddle (code also below).
var A= new Rx.Subject();
var B= new Rx.Subject();
A.onNext(0);
// '.combineLatest' needs all the dependency Observables to get emitted, before its combined signal is emitted.
//
// How to have a combined signal emitted when any of the dependencies change (using earlier given values for the rest)?
//
A.combineLatest( B, function (a,b) { return a+b; } )
.subscribe( function (v) { console.log( "AB: "+ v ); } );
B.onNext("a");
A.onNext(1);
I'd like to get two emits to the "AB" logging. One from changing B to "a" (A already has the value 0). Another from changing A to 1.
However, only changes that occur after a subscribe seem to matter (even though A has a value and thus the combined result could be computed).
Should I use "hot observables" for this, or some other method than .combineLatest?
My problem in the actual code (bigger than this sample) is that I need to make separate initialisations after the subscribes, which splits the setup into two separate places instead of having the initial values clearly up front.
Thanks
I think you have misunderstood how Subjects work. Subjects are hot Observables. They do not hold on to values, so if they receive an onNext with no subscribers, then that value will be lost to the world.
What you are looking for is either the BehaviorSubject or the ReplaySubject, both of which hold onto past values and re-emit them to new subscribers. In the former case, you always construct it with an initial value:
//All subscribers will receive 0
var subject = new Rx.BehaviorSubject(0);
//All subscribers will receive 1
//Including all future subscribers
subject.onNext(1);
In the latter case, you set the number of past values to be replayed for each new subscription:
var subject = new Rx.ReplaySubject(1);
//All new subscribers will receive 0 until the subject receives its
//next onNext call
subject.onNext(0);
Rewriting your example, it could be:
var A= new Rx.BehaviorSubject(0);
var B= new Rx.Subject();
// '.combineLatest' needs all the dependency Observables to get emitted, before its combined signal is emitted.
//
// How to have a combined signal emitted when any of the dependencies change (using earlier given values for the rest)?
//
A.combineLatest( B, function (a,b) { return a+b; } )
.subscribe( function (v) { console.log( "AB: "+ v ); } );
B.onNext("a");
A.onNext(1);
//AB: 0a
//AB: 1a
On another note, realizing of course that this is all new to you, in most cases you should not need to use a Subject directly as it generally means that you are trying to wrangle Rx into the safety of your known paradigms. You should ask yourself, where is your data coming from? How is it being created? If you ask those questions enough, following your chain of events back up to the source, 9 out of 10 times you will find that there is probably an Observable wrapper for it.
I'm developing a simple REST application that leverages RxJava to send requests to a remote server (1). For each incoming request to the REST API, a request is sent (using RxJava and RxNetty) to (1). Everything is working fine, but now I have a new use case:
In order not to bombard (1) with too many requests, I need to implement rate limiting. One way to solve this (I assume) would be to add each Observable created when sending a request to (1) into another Observable (2) that does the actual rate limiting. (2) will then act more or less like a queue and process the outbound requests as fast as possible (but not faster than the rate limit). Here's some pseudo-code:
Observable<MyResponse> r1 = createRequestToExternalServer() // In thread 1
Observable<MyResponse> r2 = createRequestToExternalServer() // In thread 2
// Somehow send r1 and r2 to the "rate limiter" observable, (2)
rateLimiterObservable.sample(1 / rate, TimeUnit.MILLISECONDS)
How would I use Rx/RxJava to solve this?
I'd use a hot timer along with an atomic counter that keeps track of the remaining calls allowed in the given interval:
int rate = 5;
long interval = 1000;

AtomicInteger remaining = new AtomicInteger(rate);

// Hot timer: every interval it refills the remaining-call budget.
ConnectableObservable<Long> timer = Observable
        .interval(interval, TimeUnit.MILLISECONDS)
        .doOnNext(e -> remaining.set(rate))
        .publish();
timer.connect();

Observable<Integer> networkCall = Observable.just(1).delay(150, TimeUnit.MILLISECONDS);

// Each subscription first checks the budget; if it is exhausted, fail fast.
Observable<Integer> limitedNetworkCall = Observable
        .defer(() -> {
            if (remaining.getAndDecrement() != 0) {
                return networkCall;
            }
            return Observable.error(new RuntimeException("Rate exceeded"));
        });

// Demo: attempt a call every 100 ms; calls above the rate limit map to -1.
Observable.interval(100, TimeUnit.MILLISECONDS)
        .flatMap(t -> limitedNetworkCall.onErrorReturn(e -> -1))
        .take(20)
        .toBlocking()
        .forEach(System.out::println);
Using Reactive Extensions, it is easy to subscribe twice to the same observable.
When a new value is available in the observable, both subscribers are called with this same value.
Is there a way to have each subscriber get a different value (the next one) from this observable?
Example of what I'm after:
source sequence: [1,2,3,4,5,...] (infinite)
The source is constantly adding new items at an unknown rate.
I'm trying to execute a lengthy async action for each item using N subscribers.
1st subscriber: 1,2,4,...
2nd subscriber: 3,5,...
...
or
1st subscriber: 1,3,...
2nd subscriber: 2,4,5,...
...
or
1st subscriber: 1,3,5,...
2nd subscriber: 2,4,6,...
I would agree with Asti.
You could use Rx to populate a queue (BlockingCollection) and then have competing consumers read from the queue. This way, if one consumer happens to be faster, it can pick up the next item while the other is still busy.
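For example, a rough sketch of that approach (the BlockingCollection, the Interval source and the two Task-based consumers here are purely illustrative, and it assumes the usual System.Reactive.Linq, System.Collections.Concurrent, System.Threading and System.Threading.Tasks namespaces):

var queue = new BlockingCollection<long>();

// Rx side: push every value into the queue and mark it complete when the source completes.
var source = Observable.Interval(TimeSpan.FromMilliseconds(100)).Take(20);
source.Subscribe(queue.Add, () => queue.CompleteAdding());

// Competing consumers: whichever worker is free takes the next item.
Action<string> consume = name =>
{
    foreach (var item in queue.GetConsumingEnumerable())
    {
        Console.WriteLine("{0} handled {1}", name, item);
        Thread.Sleep(250); // simulate the lengthy action
    }
};

Task.WaitAll(
    Task.Run(() => consume("consumer 1")),
    Task.Run(() => consume("consumer 2")));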
However, if you want to do it, against good advice :), then you could just use the Select operator overload that provides you with the index of each element. You can then pass that down to your subscribers and they can filter on a modulus. (Yuck! Leaky abstractions, magic numbers, potentially blocking, potential side effects on the source sequence, etc.)
var source = Observable.Interval(TimeSpan.FromSeconds(1))
    .Select((element, i) => new { Index = i, Element = element });

var subscription1 = source.Where(x => x.Index % 2 == 0).Subscribe(x => DoWithThing1(x.Element));
var subscription2 = source.Where(x => x.Index % 2 == 1).Subscribe(x => DoWithThing2(x.Element));
Also remember that if the work done in the OnNext handler is blocking, it will still block the scheduler it runs on. This could affect the speed of your source/producer. Another reason why Asti's answer is a better option.
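If you do go down this road, one way to keep a blocking handler from slowing the producer is to add an ObserveOn so the handler runs on a different scheduler (the scheduler choice below is just an example, and note that ObserveOn buffers notifications if the handler cannot keep up):

source.Where(x => x.Index % 2 == 0)
    .ObserveOn(TaskPoolScheduler.Default) // handler now runs on the task pool, not the source's thread
    .Subscribe(x => DoWithThing1(x.Element));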
Ask if that is not clear :-)
How about:
IObservable<TRet> SomeLengthyOperation(T input)
{
    return Observable.Defer(() => Observable.Start(() =>
    {
        return someCalculatedValueThatTookALongTime;
    }, TaskPoolScheduler.Default));
}

someObservableSource
    .SelectMany(x => SomeLengthyOperation(x))
    .Subscribe(x => Console.WriteLine("The result was {0}", x));
You can even limit the number of concurrent operations:
someObservableSource
    .Select(x => SomeLengthyOperation(x))
    .Merge(4 /* at a time */)
    .Subscribe(x => Console.WriteLine("The result was {0}", x));
For the Merge(4) to work, it's important that the Observable returned by SomeLengthyOperation be a cold Observable, which is what the Defer does here: it makes the Observable.Start not happen until someone subscribes.
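Roughly, the difference looks like this (LongRunningCall is just a placeholder for your own work):

// Eager: the work starts as soon as this line executes,
// regardless of when (or whether) anything subscribes.
var eager = Observable.Start(() => LongRunningCall(), TaskPoolScheduler.Default);

// Deferred: nothing happens until Merge(4) subscribes,
// so at most 4 operations are in flight at any time.
var deferred = Observable.Defer(() =>
    Observable.Start(() => LongRunningCall(), TaskPoolScheduler.Default));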
I have two methods that both return an IObservable
IObservable<Something[]> QueryLocal();
and
IObservable<Something[]> QueryWeb();
QueryLocal is always successful. QueryWeb is susceptible to both a timeout and possible web errors.
I wish to implement a QueryLocalAndWeb() that calls both and combines their results.
So far I have:
IObservable<Something[]> QueryLocalAndWeb()
{
    var a = QueryLocal();
    var b = QueryWeb();

    var plan = a.And(b).Then((x, y) => x.Concat(y).ToArray());

    return Observable.When(plan).Timeout(TimeSpan.FromSeconds(10), a);
}
However, I'm not sure that it handles the case where QueryWeb yields an error.
In the future I might have a QueryWeb2() that also needs to be taken into account.
So, how do I combine the results from a number of IObservables ignoring the ones that throw errors (or time out)?
I guess OnErrorResumeNext should be able to handle this scenario:
From MSDN:
Continues an observable sequence that is terminated normally or by an
exception with the next observable sequence.
IObservable<Something[]> QueryLocalAndWeb()
{
    var a = QueryLocal();
    var b = QueryWeb().Timeout(TimeSpan.FromSeconds(10));
    return Observable.OnErrorResumeNext(b, a);
}
You can concatenate the arrays by using aggregation on the returned observable.
I am assuming that both the local and the web query are cold observables, i.e. they start producing values only when someone subscribes to them.
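For example, something along these lines (the Aggregate seed and the LINQ Concat/ToArray are just one way to flatten the arrays, and System.Linq is assumed):

IObservable<Something[]> QueryLocalAndWeb()
{
    var a = QueryLocal();
    var b = QueryWeb().Timeout(TimeSpan.FromSeconds(10));

    // Collect every Something[] that arrives and concatenate them into a single array.
    return Observable.OnErrorResumeNext(b, a)
        .Aggregate(new Something[0], (acc, batch) => acc.Concat(batch).ToArray());
}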
How about:
var plan = a.And(b.Catch(Observable.Empty<Something[]>())).Then((x, y) => x.Concat(y).ToArray());