RxJava: how to generify transformers? - rx-java2

I've found myself writing a lot of transformers to generify common stream operations such as retry, applying schedulers, etc. This led to a lot of code duplication, because each stream type (Single, Completable, etc.) has its own Transformer, so I had to implement four different transformers that do exactly the same thing. That also led to duplication in tests. Is there a way to generify transformers?

Related

Which operations are easier to implement in pull vs push models in streaming libraries (and vice versa)?

The author of Monix says, in a comparison of Monix with FS2:
Where FS2 is better:
the model of communication between producers and consumers is pull-based, sometimes making it easier to implement new operators
Where Monix is better:
the model of communication between producers and consumers is push-based (with back-pressure) and this makes it inherently more efficient
A few questions arise:
Which operations are easier to implement in a pull-based model?
Are there operations that are harder to implement this way?
Why is the pull-based approach inherently slower?
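To make the distinction concrete, here is a minimal sketch of the two interface shapes in Scala. The traits and the Ack type are illustrative only (the push side is loosely modelled on Monix's Observer); they are not the actual FS2 or Monix APIs.

```scala
import scala.concurrent.Future

// Pull: the consumer asks for the next element when it is ready,
// so back-pressure is implicit in the act of pulling.
trait PullSource[A] {
  def next(): Future[Option[A]] // None signals end-of-stream
}

// Push: the producer drives the consumer; back-pressure has to be
// signalled explicitly via the acknowledgement returned from onNext.
sealed trait Ack
case object Continue extends Ack
case object Stop     extends Ack

trait PushObserver[A] {
  def onNext(elem: A): Future[Ack]
  def onComplete(): Unit
  def onError(ex: Throwable): Unit
}
```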

Should Akka Actors do real processing tasks?

I'm writing an application that reads relatively large text files, validates and transforms the data (every line in a text file is an item of its own; there are around 100M items per file) and creates some kind of output. There is already a multithreaded Java application (using a BlockingQueue between the reading/processing/persisting tasks), but I want to implement a Scala application that does the same thing.
Akka seems to be a very popular choice for building concurrent applications. Unfortunately, due to the asynchronous nature of actors, I still don't understand what a single actor can or can't do, e.g. whether I can use actors as traditional workers that do some sort of calculation.
Several pieces of documentation say that actors should never block, and I understand why. But the given examples of blocking code only ever mention things like blocking file/network IO: things that leave the actor waiting for a short period of time, which is of course a bad thing.
But what if the actor is "blocking" because it is actually doing something useful instead of waiting? In my case, the processing and transformation of a single line/item of text takes 80ms, which is quite a long time (pure processing, no IO involved). Can this work be done by an actor directly, or should I use a Future instead (but then, if I have to use Futures anyway, why use Akka in the first place)?
The Akka docs and examples show that work can be done directly by actors. But it seems that the authors only do very simplistic work (such as calling filter on a String or incrementing a counter, and that's it). I don't know if they do this to keep the docs simple and concise or because you really should not do more than that within an actor.
How would you design an Akka-based application for my use case (reading a text file, processing every line, which takes quite some time, and eventually persisting the result)? Or is this the kind of problem that does not suit Akka?
It all depends on the type of actor.
I use this rule of thumb: if you don't need to talk to this actor and this actor does not have any other responsibilities, then it's ok to block in it while doing actual work. You can treat it as a Future, and this is what I would call a "worker".
If you block in an actor that is not a leaf node (worker), i.e. a work distributor, then the whole system will slow down.
There are a few patterns that involve work pulling/pushing or an actor-per-request model. Either of those could be a fit for your application. You can have a manager that creates an actor for each piece of work; when the work is finished, the actor sends the result back to the manager and dies. You can also keep an actor alive and ask it for more work. You can also combine actors and Futures.
Sometimes you want to be able to talk to a worker if your processing is more complex and involves multiple stages. In that case a worker can delegate work to yet another actor or to a Future.
To sum up: don't block in manager/work-distribution actors. It's ok to block in workers if that does not slow your system down.
Disclaimer: by blocking I mean doing actual work, not just busy waiting, which is never ok.
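As a minimal sketch of the actor-per-request variant described above (using classic Akka actors; the message types and the process function are illustrative placeholders, not from the question):

```scala
import akka.actor.{Actor, Props}

final case class Work(line: String)
final case class Result(output: String)

// Leaf worker: it is fine for it to spend ~80 ms of pure CPU time per item,
// because nothing needs to talk to it while it processes a single piece of work.
class Worker extends Actor {
  def receive: Receive = {
    case Work(line) =>
      sender() ! Result(process(line))
      context.stop(self) // actor-per-request: the worker dies when the work is done
  }
  private def process(line: String): String = line.trim.toUpperCase // stand-in for the real transformation
}

// Manager / work distributor: never blocks, only spawns workers and collects results.
class Manager extends Actor {
  def receive: Receive = {
    case work: Work     => context.actorOf(Props[Worker]()) ! work
    case Result(output) => () // persist or forward the result here
  }
}
```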
Doing computations that take 100ms is fine in an actor. However, you need to make sure to properly deal with backpressure. One way would be to use the work-pulling pattern, where your CPU bound actors request new work whenever they are ready instead of receiving new work items in a message.
That said, your problem description sounds like a processing pipeline that might benefit from a higher-level abstraction such as Akka Streams. Basically, produce a stream of file names to be processed and then use transformations such as map to get the desired result. I have something like this in production that is pretty similar to your problem description, and it works very well, provided the data used by the individual processing chunks is not too large.
Of course, a stream will also be materialized to a number of actors. But the high-level interface will be more type-safe and easier to reason about.
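A rough sketch of such an Akka Streams pipeline (assuming Akka 2.6-style APIs; the file names, framing settings and transform function are placeholders):

```scala
import java.nio.file.Paths

import akka.actor.ActorSystem
import akka.stream.scaladsl.{FileIO, Framing, Sink, Source}
import akka.util.ByteString

import scala.concurrent.Future

object LinePipeline extends App {
  implicit val system: ActorSystem = ActorSystem("line-pipeline")
  import system.dispatcher

  // Stand-in for the ~80 ms per-line validation/transformation.
  def transform(line: String): String = line.trim.toUpperCase

  Source(List("items-1.txt", "items-2.txt"))                 // stream of file names
    .flatMapConcat(name => FileIO.fromPath(Paths.get(name))) // raw bytes of each file
    .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 8192, allowTruncation = true))
    .map(_.utf8String)                                       // one element per line
    .mapAsyncUnordered(parallelism = 8)(line => Future(transform(line))) // bounded parallel CPU work
    .runWith(Sink.foreach[String](result => ()))             // persist results here instead
    .onComplete(_ => system.terminate())
}
```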

Event based design - Futures, Promises vs Akka Persistence

I have multiple use cases which require predefined events to be fired based on certain user actions.
E.g. let's say that when a NewUser is created in the application, it has to call CreateUserInWorkflowSystem and FireEmailToTheUser asynchronously. There are many other business cases of this nature where events are predefined based on a use case. I can use Promises/Futures to model these events as below:
if 'NewUser' then
    call `CreateUserInWorkflowSystem` (which will be Future based API)
    call `FireEmailToTheUser` (which will be Future based API)
if 'FileImport' then
    call `API3` (which will be Future based call)
    call `API4` (which will be Future based call)
All those Future calls will have to log failures somewhere so that failed calls can be retried, etc. Note that the NewUser call won't wait for those Futures (events, per se) to complete.
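A possible sketch of that plain-Futures approach in Scala; createUserInWorkflowSystem, fireEmailToTheUser and logFailureForRetry are hypothetical stand-ins for the real APIs:

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.util.{Failure, Success}

object NewUserEvents {
  // Hypothetical Future-based APIs standing in for the real ones.
  def createUserInWorkflowSystem(userId: String): Future[Unit] = Future.successful(())
  def fireEmailToTheUser(userId: String): Future[Unit]         = Future.successful(())
  def logFailureForRetry(event: String, userId: String, ex: Throwable): Unit = () // e.g. write to a retry queue/table

  def onNewUser(userId: String): Unit = {
    // Fire and forget: the NewUser call does not wait on these Futures.
    val events = List(
      "CreateUserInWorkflowSystem" -> createUserInWorkflowSystem(userId),
      "FireEmailToTheUser"         -> fireEmailToTheUser(userId)
    )
    // Record each failure somewhere durable so the event can be retried later.
    events.foreach { case (name, f) =>
      f.onComplete {
        case Success(_)  => ()
        case Failure(ex) => logFailureForRetry(name, userId, ex)
      }
    }
  }
}
```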
That was using plain Futures/Promises APIs. However, I am thinking Akka Persistence would be an appropriate fit here, and blocking calls can still run inside Futures. With Akka Persistence, handling failure will be easy as it provides that out of the box. I understand Akka Persistence is still in an experimental stage, but that doesn't seem to be a big concern, as Typesafe generally keeps new frameworks in an experimental state before promoting them in a future release (the same was true with macros). Given these requirements, do you think Futures/Promises or Akka Persistence is a better fit here?
This is an opinion-based question - not the best type to ask on SO. Anyway, trying to answer.
It really depends on what you are more comfortable with and what your requirements are. Do you need to scale the system beyond a single JVM later? Use Akka. Do you want to keep it simpler? Use Futures.
If you use Futures you can store all state and actions to execute in a job queue/db. It's quite reasonable.
If you use Akka Persistence then obviously it will help you with persistence. Akka also makes supervision, recovery and retries easier. If your CreateUserInWorkflowSystem action fails, the result is propagated to the supervising actor, which probably restarts the failed actor and makes it retry up to N times. If your supervising actor fails, then its supervisor will do the right thing, or eventually the whole app will crash, which is good. With Futures you would have to implement this mechanism yourself and make sure that the application can crash when needed.
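For illustration, a classic-Akka supervision setup along those lines might look roughly like this (the actor names and retry limits are assumptions, and note that a Restart by itself does not re-deliver the failed message):

```scala
import akka.actor.{Actor, OneForOneStrategy, Props, SupervisorStrategy}
import scala.concurrent.duration._

class WorkflowSupervisor extends Actor {
  // Restart a failing child up to 5 times within a minute, then escalate.
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 5, withinTimeRange = 1.minute) {
      case _: Exception => SupervisorStrategy.Restart
    }

  private val worker = context.actorOf(Props[WorkflowWorker](), "workflow-worker")

  def receive: Receive = {
    case msg => worker forward msg
  }
}

class WorkflowWorker extends Actor {
  def receive: Receive = {
    case userId: String =>
      // Call CreateUserInWorkflowSystem here; throwing on failure triggers the
      // supervisor's Restart directive above. Re-sending the failed message
      // (the actual retry) is a separate concern, e.g. handled via persistence.
      ()
  }
}
```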
If you have completely independent actions then Futures and Actors sound about the same. If you have to chain actions and compose them, then using Futures will be a somewhat more natural thing to do: for comprehensions, etc. In Akka you would have to wait for a message and, based on the type of the message, perform the next action.
Try to mock a simple implementation using both and compare what you like/dislike given your particular application requirements. Overall, both choices are good, but I'm slightly leaning towards actors in this case.

Concurrency overview

Scala provides a great suite of tools for dealing with concurrency (Akka, parallel collections, futures and so on), but it also leaves me a bit puzzled. Is there some kind of guideline for when to use what? Some kind of best practices?
First of all, concurrency != parallelism. The latter can be employed for problems which you reason about in an essentially sequential manner, but which can be efficiently partitioned into chunks that can be processed independently (before being put together again at the end). For example, mapping and filtering a collection would be a scenario for parallel collections.
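For instance, with Scala's parallel collections (a separate scala-parallel-collections module since Scala 2.13), an independent filter/map over a collection is partitioned automatically:

```scala
import scala.collection.parallel.CollectionConverters._ // Scala 2.13+: scala-parallel-collections module

// Each chunk of the range is filtered and mapped on a separate thread,
// then the partial sums are combined at the end.
val total = (1 to 1000000).par.filter(_ % 3 == 0).map(n => n.toLong * n).sum
```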
Some others have reasoned about actors versus futures. In short, actors are more OO in the sense that each actor can encapsulate its own internal state, they are more like black boxes. Also actor concurrency is nondeterministic, whereas dataflow and futures are deterministic. Actors are a natural choice when you want to distribute tasks across multiple computers. Actors can accept multiple types of messages, whereas futures allow function composition over one specific type. (This is simplified, as Akka now has typed channels, which I guess makes it more composable). Actors would be suitable for services which wait for requests, whereas futures can be thought of as lazy answers.
If you have multiple concurrent threads, software transactional memory (STM) is also a useful abstraction. STM doesn't manage threadpools or concurrent tasks by itself, but when combined with them, it handles mutable state in a safe manner.
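A small sketch with the scala-stm library: two Refs updated atomically from any number of threads, with conflicting transactions retried by the STM runtime rather than guarded by locks. The object and method names are illustrative only.

```scala
import scala.concurrent.stm._

object Accounts {
  private val from = Ref(100) // mutable-but-transactional state
  private val to   = Ref(0)

  // Either both updates happen or neither does, even with concurrent callers.
  def transfer(amount: Int): Unit = atomic { implicit txn =>
    from() = from() - amount
    to()   = to() + amount
  }
}
```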

Reactive services and scalability without Akka

I've read the Reactive Manifesto a couple of times and tried to wrap my head around all this reactive, async, non-blocking stuff. It's clear how to make scalable systems on top of Actors, but will I get the same effect, in terms of scalability and asynchronous execution, if I actively use Scala's Future all over my code, so that every method either accepts or returns a Future? Would such a service be scalable and responsive? Let's say that in this question I'm not much interested in the event-driven and resilient parts of the service.
This answer reflects my experience using Akka in Scala and its Actor and Future types. I don't consider myself an expert, yet, but I've done a few systems using these libraries and feel I am beginning to develop a sense for how to use them.
The choice of using an Actor vs. a Future is about the nature of the concurrency you require. Futures compose monadically and in DAG-structured graphs, making certain computational structures very elegantly composed. But they have severe limitations. Basically, the computation that is performed concurrently within a Future must be self-contained (or at most reference only immutable external state) or you have not solved the problem of inter-thread interference with all the attendant risks such as deadlock, race conditions, unpredictable behavior or data structure corruption.
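For example, self-contained lookups compose into a small DAG with flatMap and a for-comprehension (a toy sketch; the fetch functions are placeholders):

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

def fetchUser(id: Long): Future[String]      = Future(s"user-$id")   // placeholder lookups
def fetchOrders(user: String): Future[Int]   = Future(user.length)
def fetchDiscount(user: String): Future[Int] = Future(user.length / 2)

// A small DAG: orders and discount both depend on user and run concurrently.
def summary(id: Long): Future[(Int, Int)] =
  fetchUser(id).flatMap { user =>
    val orders   = fetchOrders(user)
    val discount = fetchDiscount(user)
    for { o <- orders; d <- discount } yield (o, d)
  }
```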
When your computation involves long-lived, mutable state, an Actor encapsulates that securely, preventing corruption and race conditions. On the other hand, actors are not composable, but they do provide a lot of flexibility in how you construct networks of interacting computations. This is true only provided you don't limit actor responses to always and only being sent back (as the response to a request) to the actor from which the request was received. If you only send responses to the requesting actor, you're limited to a tree-structured pattern of inter-actor interaction.
In any real, non-trivial system, you're very likely to use both formalisms, Future and Actor.