How to implement scheduler inheritance when another source tees into a pipeline? - rx-java2

I'd like to implement "scheduler inheritance" as part of an RxJava2-using API. I want consumers of my API to be able to think in terms of building a single processing chain rather than a DAG, even though, internally, new events are being teed in as an implementation detail.
I don't see any way to do the equivalent of:
observable
    .flatMap {
        val scheduler = Schedulers().current!!
        someOtherObservable
            .observeOn(scheduler)
    }
Is there some other way to inherit a scheduler?
More Context
I have a pipeline like:
compositeDisposable += Environment
    .lookupDeviceInfo()
    .subscribeOn(scheduler)
    .flatMap { deviceInfo ->
        Device(deviceId = deviceInfo.id)
            .sendCommand()
    }
    .subscribe(
        { result -> /* process result */ },
        { e -> /* log error */ })
To the consumer, this looks like they pushed all the work onto the specified scheduler: events from lookupDeviceInfo() get vectored to a worker from that scheduler, and they expect to stick on that worker.
In practice, they have a bug, because sendCommand() tees in events from another event source as an implementation detail:
sendMessageSingle(deviceId, payload)
    .flatMap { sentMessageId ->
        responseObservable
            .filter { it.messageId == sentMessageId }
            .firstOrError()
    }
Events stream in from responseObservable, but none of those events get vectored to the specified scheduler, because that got applied upstream.

From the comments:
Returning to the same scheduler thread requires you to provide a single-threaded scheduler (e.g., Schedulers.from(Executor), Schedulers.single(), etc.). There is no current scheduler because there is no guarantee some code will run on any of the standard schedulers; it could be executing on arbitrary threads of the system, other frameworks, etc. Thus, you have to route the signals back to the desired thread via observeOn.
I'm not concerned about landing on the same thread, just the same scheduler. (Even changing Workers may be fine, so long as the new worker is vended by the same scheduler as the old.)
Then the suggestion still applies and you can forego the "single-threaded" property I mentioned.
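Putting the comment thread together, the workable pattern is to create the scheduler once, pass it to whatever tees in the second source, and apply observeOn there. Below is a minimal sketch of that shape, written in Scala against the RxJava 2 API (the operators are the same from Kotlin); Observable.interval stands in for responseObservable and a plain string stands in for the device lookup, since the question's Environment/Device types are not shown in full.

import java.util.concurrent.TimeUnit
import io.reactivex.{Observable, ObservableSource}
import io.reactivex.functions.{Consumer, Function}
import io.reactivex.schedulers.Schedulers

object SharedSchedulerSketch extends App {
  // There is no "current" scheduler to discover at runtime, so create one
  // single-threaded scheduler up front and hand it to the code that tees in
  // the second source.
  val scheduler = Schedulers.single()   // Schedulers.from(executor) would also work

  // Stand-in for the responseObservable that gets teed in as an implementation detail.
  val responses: Observable[java.lang.Long] = Observable.interval(10, TimeUnit.MILLISECONDS)

  Observable.just("device-1")
    .subscribeOn(scheduler)
    .flatMap[String](new Function[String, ObservableSource[String]] {
      def apply(id: String): ObservableSource[String] =
        responses
          .observeOn(scheduler)   // route the tee'd-in events back onto the shared scheduler
          .take(1)
          .map[String](new Function[java.lang.Long, String] {
            def apply(n: java.lang.Long): String = s"$id -> response $n"
          })
    })
    .blockingForEach(new Consumer[String] {
      def accept(s: String): Unit = println(s)
    })
}

The design choice is simply to make the scheduler an explicit parameter of the implementation rather than trying to infer it: the consumer still sees a single chain, and every event, including the tee'd-in responses, ends up on workers vended by the same scheduler.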

Related

Buffering slow subscribers in Swift combine

I'm currently struggling to get a desired behaviour when using Combine. I've previously used the Rx framework and believe (from what I remember) that the described scenario is possible there by specifying backpressure strategies for buffering.
The issue is that I have a publisher that publishes values very rapidly and two subscribers to it: one that can react just as fast as the values are published (cool beans), and a second that runs some CPU-expensive processing.
I know that in order to support the second, slower subscriber I need to allow buffering of values, but I don't seem to be able to make this happen. Here is what I have so far:
let subject = PassthroughSubject<Int, Never>()

// publish some values
Task {
    for i in 0... {
        subject.send(i)
    }
}

subject
    .print("fast")
    .sink { _ in }

subject
    .map { n -> Int in
        sleep(1) // CPU intensive work here
        return n
    }
    .print("slow")
    .sink { _ in }
Originally I thought I could use .buffer(..) on the slow subscriber, but that doesn't appear to cover this use case. What seems to happen is that the subject dispatches to each subscriber and only demands more from the publisher after the subscriber finishes, and in this case that blocks the .send(..) call of the publishing loop.
Any advice would be greatly appreciated 👍

interrupt scala parallel collection

Is there any way to interrupt a parallel collection computation in Scala?
Example:
val r = new Runnable {
  override def run(): Unit = {
    (1 to 3).par.foreach { _ => Thread.sleep(5000000) }
  }
}
val t = new Thread(r)
t.start()
Thread.sleep(300) // let them spin up
t.interrupt()
I'd expect t.interrupt to interrupt all threads spawned by par, but this is not happening; it keeps spinning inside ForkJoinTask.externalAwaitDone. It looks like that method clears the interrupted status and keeps waiting for the spawned threads to finish.
This is Scala 2.12.
The thread that you t.start() is responsible only for starting the parallel computation and for waiting to gather the result.
It is not connected to the threads that run the individual operations. Those usually run on the default ForkJoinPool, which is independent of the thread that submits the computation.
If you want to interrupt the computation, you can use a custom execution back-end (such as a manually created ForkJoinPool or a thread pool) and then shut it down, as sketched below.
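Something along these lines, as a minimal sketch assuming Scala 2.12 (where ForkJoinTaskSupport wraps a java.util.concurrent.ForkJoinPool); whether the running closures actually stop still depends on them responding to interruption, which Thread.sleep does:

import java.util.concurrent.ForkJoinPool
import scala.collection.parallel.ForkJoinTaskSupport

object InterruptPar extends App {
  val pool = new ForkJoinPool(4)

  val work = (1 to 3).par
  work.tasksupport = new ForkJoinTaskSupport(pool)   // run the parallel ops on our own pool

  val t = new Thread(() => work.foreach { _ => Thread.sleep(5000000) })
  t.start()

  Thread.sleep(300)    // let the workers spin up
  pool.shutdownNow()   // interrupts the pool's workers; the sleeping tasks fail with InterruptedException
}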
Or you can provide a callback from the computation.
But none of those approaches is a great fit for a case like this.
If you are building a production solution, or your case is complex and critical for the app, you should probably use something that has cancellation by design, like Monix's Task or a CancellableFuture.
Or at least use Future and cancel it with workarounds (one possible workaround is sketched below).
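To make that last suggestion concrete, here is one such workaround, sketched under the assumption that interrupting the thread running the body is an acceptable way to cancel. The usual caveat applies: if cancel fires after the body has already finished, the interrupt hits whatever the pool thread is doing next, which is exactly why these count as workarounds rather than proper cancellation.

object CancellableFutureWorkaround {
  import java.util.concurrent.atomic.AtomicReference
  import scala.concurrent.{ExecutionContext, Future, Promise}
  import scala.util.Try

  // Returns the Future plus a cancel function that interrupts the thread running the body.
  def interruptibly[T](body: => T)(implicit ec: ExecutionContext): (Future[T], () => Unit) = {
    val runner  = new AtomicReference[Thread]()
    val promise = Promise[T]()
    Future {
      runner.set(Thread.currentThread())
      promise.tryComplete(Try(body))   // a body cancelled mid-run typically fails with InterruptedException
    }
    val cancel = () => Option(runner.get()).foreach(_.interrupt())
    (promise.future, cancel)
  }
}

Usage would look like val (result, cancel) = CancellableFutureWorkaround.interruptibly { longRunningWork() }, with cancel() invoked whenever the caller loses interest.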

Is sending futures in Akka messages OK?

I'm working on implementing a small language to send tasks for execution and to control execution flow. After sending a task to my system, the user gets a future (on which they can call a blocking get() or flatMap()). My question is: is it OK to send futures in Akka messages?
Example: actor A sends a message Response to actor B and Response contains a future among its fields. Then at some point A will fulfill the promise from which the future was created. After receiving the Response, B can call flatMap() or get() at any time.
I'm asking because Akka messages should be immutable and work even if actors are on different JVMs. I don't see how my example above can work if actors A and B are on different JVMs. Also, are there any problems with my example even if actors are on same JVM?
Something similar is done in the accepted answer in this stackoverflow question. Will this work if actors are on different JVMs?
Without remoting it's possible, but still not advisable. With remoting in play it won't work at all.
If your goal is to have an API that returns Futures, but uses actors as the plumbing underneath, one approach could be that the API creates its own actor internally that it asks, and then returns the future from that ask to the caller. The actor spawned by the API call is guaranteed to be local to the API instance and can communicate with the rest of the actor system via the regular tell/receive mechanism, so that there are no Futures sent as messages.
class MyTaskAPI(actorFactory: ActorRefFactory) {
  import akka.pattern.ask
  implicit val timeout: Timeout = Timeout(5.seconds)   // ask needs an implicit Timeout in scope

  def doSomething(...): Future[SomethingResult] = {
    val taskActor = actorFactory.actorOf(Props[MyTaskActor])
    (taskActor ? DoSomething(...)).mapTo[SomethingResult]   // parentheses so mapTo applies to the ask's Future
  }
}
where MyTaskActor receives the DoSomething, captures the sender, sends the request out for task processing, and likely becomes a receiving state for SomethingResult, which finally responds to the captured sender and stops itself. This approach creates two actors per request, one explicitly (the MyTaskActor) and one implicitly (the handler of the ask), but keeps all state inside actors.
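For reference, a minimal sketch of what such a MyTaskActor could look like in classic untyped Akka; the DoSomething and SomethingResult message types and the "user/TaskHandler" processing actor are placeholders echoing the snippets in this answer, not anything from the question:

import akka.actor.{Actor, ActorRef}

case class DoSomething(payload: String)     // placeholder message types
case class SomethingResult(value: String)

class MyTaskActor extends Actor {
  def receive: Receive = {
    case msg: DoSomething =>
      context.actorSelection("/user/TaskHandler") ! msg   // hand the work to the (assumed) processing actor
      context.become(waitingFor(sender()))                 // capture the asker and switch state
  }

  def waitingFor(origin: ActorRef): Receive = {
    case result: SomethingResult =>
      origin ! result        // respond to the captured sender
      context.stop(self)     // and stop, as described above
  }
}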
Alternatively, you could use the ActorDSL to create just one actor inline in doSomething and use a captured Promise for completion instead of using ask:
import akka.actor.ActorDSL._

class MyTaskAPI(implicit system: ActorSystem) {   // implicit so actor(...) can find its ActorRefFactory
  def doSomething(...): Future[SomethingResult] = {
    val p = Promise[SomethingResult]()
    val tmpActor = actor(new Act {
      become {
        case msg: SomethingResult =>
          p.success(msg)
          context.stop(self)   // ActorRef has no stop(); stop via the context
      }
    })
    system.actorSelection("user/TaskHandler").tell(DoSomething(...), tmpActor)
    p.future
  }
}
This approach is a bit off the top of my head and it does use a shared value between the API and the temp actor, which some might consider a smell, but should give an idea how to implement your workflow.
If you're asking if it's possible, then yes, it's possible. Remote actors are basically interprocess communication. If you set everything up on both machines to a state where both can properly handle the future, then it should be good. You don't give any working example so I can't really delve deeper into it.

Akka: actor spawning vs filling up mailboxes

If you want to execute long running computations concurrently (on a single machine), Akka actors can help.
One approach is to spawn a new actor for each piece of work. Something like
while (true) {
  val actor = system.actorOf(Props[ProcessingActor])
  (actor ? msg).map {
    ...
    system.stop(actor)
  }
}
A second idea is to configure a set number of actors behind a router. And then send all messages to the router.
val router = system.actorOf(Props[ProcessingActor].withRouter(RoundRobinRouter(nrOfInstances = 5)))
while (true) {
  (router ? msg).map { ... }
}
I wonder, which is better if the system is overloaded (rate of incoming messages is higher than processing rate)?
Which will last longer? And will both eventually blow up the system with an OOMError?
Before you create a new Actor for each task, you could also just use a Future. It really depends on what you want to achieve. To get as much work done with the least memory usage, you should use the actor/router approach. Futures are more expensive, because each task would create a new instance of Future and Promise. But it really depends on your use case which approach is better. I just wouldn't create a lot of actors when there really is no need for them, especially as system.actorOf always creates a new error kernel.
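To illustrate the Future alternative mentioned at the start of this answer, a minimal sketch (Msg, Result, and process are made-up stand-ins for the real task types and work):

object FutureInsteadOfActor extends App {
  import scala.concurrent.Future
  import scala.concurrent.ExecutionContext.Implicits.global

  case class Msg(payload: String)       // stand-ins, not from the question
  case class Result(value: String)
  def process(msg: Msg): Result = Result(msg.payload.toUpperCase)

  // Each task is just a Future on the global pool: no actor to create or stop per task,
  // at the cost of a new Future/Promise pair each time, as noted above.
  val reply: Future[Result] = Future(process(Msg("work")))
  reply.foreach(r => println(s"done: $r"))

  Thread.sleep(100)   // give the callback a moment to run before the App exits
}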

Easiest way to do idle processing in a Scala Actor?

I have a Scala actor that does some work whenever a client requests it. When, and only when, no client is active, I would like the Actor to do some background processing.
What is the easiest way to do this? I can think of two approaches:
Spawn a new thread that times out and wakes up the actor periodically. A straightforward approach, but I would like to avoid creating another thread (to avoid the extra code, complexity, and overhead).
The Actor class has a reactWithin method, which could be used to time out from the actor itself. But the documentation says the method doesn't return. So, I am not sure how to use it.
Edit: a clarification:
Assume that the background task can be broken down into smaller units that can be independently processed.
OK, I see I need to put in my 2 cents. From the author's answer I guess the "priority receive" technique is exactly what is needed here. There is a discussion of it in the "Erlang: priority receive" question here on SO. The idea is to accept high-priority messages first and to accept other messages only in the absence of high-priority ones.
As Scala actors are very similar to Erlang's, trivial code to implement this would look like this:
def act = loop {
  reactWithin(0) {
    case msg: HighPriorityMessage => // process msg
    case TIMEOUT =>
      react {
        case msg: HighPriorityMessage => // process msg
        case msg: LowPriorityMessage => // process msg
      }
  }
}
This works as follows. An actor has a mailbox (a queue) of messages. The argument to react (or reactWithin) is a partial function, and the Actor library looks for a message in the mailbox that this partial function can be applied to. In our case that is only a HighPriorityMessage. So, if the Actor library finds such a message, it applies our partial function and we process a high-priority message. Otherwise, reactWithin with a timeout of 0 calls our partial function with the TIMEOUT argument, and we immediately try to process any available message from the queue (while the inner react waits for a message we cannot exclude the possibility of a HighPriorityMessage arriving, which is why it is matched there as well).
It sounds like the problem you describe is not well suited to the actor sub-system. An Actor is designed to sequentially process its message queue:
What should happen if the actor is performing the background work and a new task arrives?
An actor can only find out about this if it is continuously checking its mailbox as it performs the background task. How would you implement this (i.e., how would you code the background tasks as a unit of work so that the actor could keep interrupting and checking the mailbox)?
What should happen if the actor has many background tasks in its mailbox in front of the main task?
Do these background tasks get thrown away, or sent to another actor? If the latter, how can you prevent CPU time being given to that actor to perform the tasks?
All in all, it sounds much more like you need to explore some grid-style software that can run in the background (like Data Synapse)!
Just after asking this question I tried out some completely whacky code and it seems to work fine. I am not sure though if there is a gotcha in it.
import scala.actors._

object Idling

object Processor extends Actor {
  start
  import Actor._
  def act() = {
    loop {
      // here lie dragons >>>>>
      if (mailboxSize == 0) this ! Idling
      // <<<<<<
      react {
        case msg: NormalMsg => {
          // do the normal work
          reply(answer)
        }
        case Idling => {
          // do the idle work in chunks
        }
        case msg => println("Rcvd unknown message: " + msg)
      }
    }
  }
}
Explanation
Any code inside the argument of loop but before the call to react seems to get called when the Actor is about to wait for a message. I am sending an Idling message to self here. In the handler for this message I ensure that the mailbox size is 0 before doing the processing.