Is Flow API replacing Observer and Observable - reactive-programming

In Java 9 does Flow API replace Observer and Observable? If not, what does?

The new Flow API is designed as a common denominator for reactive-stream libraries like RxJava and other ReactiveX implementations. Building on Java 9, they can have their types extend the new interfaces (or so the thought goes). While it would of course be appealing to use the new API inside the JDK itself, that is not the case in Java 9, and there are no concrete plans to introduce such usage (to the best of my knowledge).
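For illustration, here is a minimal sketch (in Scala, against the java.util.concurrent.Flow interfaces) of the kind of subscriber a library could expose on Java 9; the PrintSubscriber name and the use of the JDK's SubmissionPublisher are just for demonstration:
import java.util.concurrent.{Flow, SubmissionPublisher}

// Illustration only: a tiny Subscriber built on the new Flow interfaces.
class PrintSubscriber extends Flow.Subscriber[String] {
  private var subscription: Flow.Subscription = _
  override def onSubscribe(s: Flow.Subscription): Unit = { subscription = s; s.request(1) }
  override def onNext(item: String): Unit = { println(item); subscription.request(1) }
  override def onError(t: Throwable): Unit = t.printStackTrace()
  override def onComplete(): Unit = println("done")
}

object FlowDemo extends App {
  val publisher = new SubmissionPublisher[String]()   // the JDK's reference Publisher
  publisher.subscribe(new PrintSubscriber)
  publisher.submit("hello")
  publisher.close()
  Thread.sleep(100) // give the asynchronous delivery a moment before the JVM exits
}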
Regarding Observer and Observable, the issue that triggered the deprecation states:
Application developers should consider using java.beans for a richer change notification model. Or they should consider constructs in java.util.concurrent such as queues or semaphores to pass messages among threads, with reliable ordering and synchronization properties.
These are recommendations for application developers writing new code. They give no advice on updating existing code or on what to do inside the JDK. I guess the reason is that both are supposed to stay as they are.
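To make the queue recommendation concrete, here is a minimal sketch of passing messages between two threads with java.util.concurrent (the ArrayBlockingQueue choice, the capacity, and the event strings are arbitrary):
import java.util.concurrent.{ArrayBlockingQueue, TimeUnit}

object QueueDemo extends App {
  // A bounded queue gives reliable ordering and built-in synchronization.
  val queue = new ArrayBlockingQueue[String](16)

  val producer = new Thread(() => (1 to 3).foreach(i => queue.put(s"event $i")))
  val consumer = new Thread(() => {
    var msg = queue.poll(1, TimeUnit.SECONDS)
    while (msg != null) {
      println(s"received: $msg")
      msg = queue.poll(1, TimeUnit.SECONDS)
    }
  })

  producer.start(); consumer.start()
  producer.join(); consumer.join()
}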
Note that Java does not use @Deprecated to necessarily mean "will be removed". It can also mean "use better alternatives", and I think that is the case here. So, to answer your question in a few words:
In Java 9, does the Flow API replace Observer and Observable?
No.
And if it doesn't, what does?
Nothing.

Related

Are there patterns for writing a library that hides its actor system implementation?

All of the actor system implementations I have seen (primarily using Akka) presume a web app, meaning an HTTP interface that can quite naturally be served by an asynchronous actor system.
But what if I'm writing a desktop app, or a library to be used as a component of a platform-independent app?
I want client subroutines to be able to call val childObj = parentObject.createChild( initParam ) without having to know about my allowed message types, or the actor system in general. E.g., not parentObject ! CreateChild( initParam ) and then handling a response received in another message.
I know I could hide the asynchronous responses behind Futures, but are there other known patterns for a synchronous system handing off computation to a hidden actor system?
(I realize that this will result in a blocking call into the library.)
Desktop app
A lot of what applies to libraries also applies here, so see the section below. If nothing else, you can wrap the part of your code that uses Akka as a separate library. One caveat: if you're using Swing, you will probably want to use SwingUtilities.invokeLater to get back onto the Event Dispatch Thread before interacting with the GUI. (Also, don't block that thread; you probably want to use futures to avoid this, so consider designing your library to return futures.)
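A sketch of that hand-off, assuming a future-returning library call (loadReport and the JLabel to update are hypothetical):
import javax.swing.{JLabel, SwingUtilities}
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object EdtExample {
  // Hypothetical: stands in for any future-returning call into your library.
  def loadReport(): Future[String] = Future("report contents")

  def showReport(label: JLabel): Unit =
    loadReport().foreach { text =>
      // Hop back onto the Event Dispatch Thread before touching Swing components.
      SwingUtilities.invokeLater(() => label.setText(text))
    }
}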
Library
Your example seems to assume a thin wrapper around your actors, or, at the very least, a bottom-up design where your interface is driven by your implementation details. Instead, design the library in a more top-down manner: first figure out the library's interface, then (possibly) use Akka as an implementation detail. (This is a good idea for library design in general.) If you've already written something using Akka, don't worry; just design the interface separately from the implementation and stitch the two together. If you do this, you don't need a specific pattern, as the normal patterns of interface design apply regardless of the fact that you are using Akka.
As an example, consider a compiler. The compile method signature might be simple:
def compile(sources: List[File]): List[File] // Returns a list of binaries
No mention of actors here. But internally the implementation might do:
compileActor ? Compile(sources)
...and block on the result. The main compiler actor might depend on other actors, but there's no reason to expose those through the public API.
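Putting those pieces together, a sketch of such a facade (the CompilerActor, the Compile message, and the 30-second timeout are assumptions made for illustration, not part of the original answer):
import java.io.File
import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._

// Illustrative message and actor; a real compiler would do actual work here.
final case class Compile(sources: List[File])

class CompilerActor extends Actor {
  def receive: Receive = {
    case Compile(sources) =>
      sender() ! sources.map(f => new File(f.getPath + ".class"))
  }
}

// Public, actor-free facade: callers see a plain blocking method.
class Compiler(system: ActorSystem) {
  private implicit val timeout: Timeout = Timeout(30.seconds)
  private val compilerActor = system.actorOf(Props[CompilerActor](), "compiler")

  def compile(sources: List[File]): List[File] =
    Await.result((compilerActor ? Compile(sources)).mapTo[List[File]], timeout.duration)
}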

Why isn't ask defined directly on ActorRef for Akka?

I am learning about Akka, and while exploring the API I came across something curious (at least to me). The tell function is defined directly on the ActorRef class. However, the ask function is declared in the AskSupport trait. I can't think of any good reason why a separate AskSupport trait was needed rather than including ask in the API for ActorRef (and ? in the API of ScalaActorRef). Would anyone care to enlighten me on the reasoning behind this?
The Actor docs give the author's reasoning:
The ask pattern involves actors as well as futures, hence it is offered as a use pattern rather than a method on ActorRef
The ask pattern includes Future extensions, not just ActorRef extensions. In particular, if you look through the akka.pattern docs, you'll note PipeToSupport and PipeableFuture beyond the extensions to Akka objects.
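For instance, a minimal sketch showing both halves (the Forwarder actor and the worker reference are placeholders, not Akka API):
import akka.actor.{Actor, ActorRef}
import akka.pattern.{ask, pipe}
import akka.util.Timeout
import scala.concurrent.duration._

// ask (the ActorRef extension) produces a Future; pipeTo (the Future extension)
// routes its eventual result on to another actor.
class Forwarder(worker: ActorRef) extends Actor {
  import context.dispatcher
  private implicit val timeout: Timeout = Timeout(5.seconds)

  def receive: Receive = {
    case msg => (worker ? msg).pipeTo(sender())
  }
}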
According to the Akka Docs:
There are performance implications of using ask since something needs to keep track of when it times out, there needs to be something that bridges a Promise into an ActorRef and it also needs to be reachable through remoting. So always prefer tell for performance, and only ask if you must.
Given this, it makes sense that you'd want to subtly discourage the use of ask by requiring users to explicitly import the facility. This has the added benefit of reducing bloat in the ActorRef API.
Keep in mind that Akka is based on the Erlang actor system, and I believe that tell is the only facility for communication between actors in that system. I imagine the Akka developers want to keep Akka's ActorRef as similar as possible to its Erlang analog.
One other thing to keep in mind is that ask is simply a pattern built on tell: it sets up a Future and a temporary actor to handle the responding tell, making it look like a native request/response exchange. But under the hood, it's really just two actors telling back and forth. Since it is a convention of using tells to create the appearance of request/response, it is set up as a pattern in Akka that can be pulled in when needed rather than being part of ActorRef.
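To make the "two tells" point concrete, here is a rough hand-rolled sketch of what ask does internally (askByHand is a made-up name, and the real ask also handles timeouts, remoting and cleanup):
import akka.actor.{Actor, ActorRef, ActorSystem, Props}
import scala.concurrent.{Future, Promise}

object AskByHand {
  def askByHand(system: ActorSystem, target: ActorRef, msg: Any): Future[Any] = {
    val promise = Promise[Any]()
    // A throwaway actor whose only job is to capture the first reply.
    val tempActor = system.actorOf(Props(new Actor {
      def receive: Receive = {
        case reply =>
          promise.success(reply)
          context.stop(self)
      }
    }))
    target.tell(msg, tempActor) // just a plain tell, with the temporary actor as sender
    promise.future
  }
}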

Event handling in Scala

What should I use to implement simple event handling in Scala?
I don't want to rely on Scala.Swing APIs and I'm not sure if I should use Actors.
What I need are simple event handlers and event sources, generic over the event type. Concurrency is not a requirement. Aren't Actors too heavyweight for simple tasks that require no concurrency?
If you don't want to rely on Scala Swing and you only require publishers and observers, why not roll your own implementation? It would amount to two or three Scala traits of fewer than ten lines each (depending on whether you also want event buses).
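As a sketch of how small this can be (EventSource and ButtonClicked are made-up names):
// A minimal, non-thread-safe event source, generic over the event type.
trait EventSource[E] {
  private var handlers: List[E => Unit] = Nil
  def subscribe(handler: E => Unit): Unit = handlers ::= handler
  def publish(event: E): Unit = handlers.foreach(_.apply(event))
}

// Usage with a made-up event type:
final case class ButtonClicked(id: String)

object Buttons extends EventSource[ButtonClicked]
// Buttons.subscribe(e => println(s"clicked: ${e.id}"))
// Buttons.publish(ButtonClicked("ok"))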
If you don't mind a more complex API (especially since you get concurrency handling for free), you could try the Observables from RxScala. Take a look at its Observable, Observer and Subject APIs.
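A tiny taste of that API, assuming the rx.lang.scala artifact is on the classpath (the values are arbitrary):
import rx.lang.scala.Observable

object RxDemo extends App {
  val numbers: Observable[Int] = Observable.just(1, 2, 3)
  // Observers are attached via subscribe; RxScala takes care of the scheduling concerns.
  numbers.map(_ * 2).subscribe(n => println(s"got $n"))
}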
In my case the Akka actor solution was a bit of an overkill, so I ended up implementing my own event-sourcing solution in this open-source project.
The persistence layer is left to the developer, but I provide practical examples that run against Couchbase.
Take a look in case you find it useful.
https://github.com/politrons/Scalaydrated

Why are Scala actors deprecated in 2.10?

I was just comparing different Scala actor implementations, and now I'm wondering what the motivation could have been for deprecating the existing Scala actor implementation in 2.10 and replacing the default actors with the Akka implementation. Neither the migration guide nor the first announcement gives any explanation.
According to the comparison the two solutions were different enough that keeping both would have been a benefit. Thus, I'm wondering whether there were any major problems with the existing implementation that caused this decision? In other words, was it a technical or a political decision?
I can only offer you an educated guess:
Akka provides a stable and powerful library for working with actors, along with lots of features that deal with high concurrency (futures, agents, transactional actors, STM, FSM, non-blocking I/O, ...).
It also implements actors in a safer way than Scala's, in that client code only has access to a generic ActorRef. This makes it impossible to interact with actors other than through message passing.
[edited: As Roland pointed out, this also enables additional features like fault tolerance through a supervision hierarchy and location transparency: the ability to deploy an actor locally or remotely with no change needed in the client code.
The overall design more closely resembles the original one in Erlang.]
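A minimal sketch of the ActorRef encapsulation point (the Greeter actor is a made-up example):
import akka.actor.{Actor, ActorSystem, Props}

class Greeter extends Actor {
  def receive: Receive = { case name: String => println(s"hello, $name") }
}

object Demo extends App {
  val system = ActorSystem("demo")
  // actorOf hands back an ActorRef, never the Greeter instance itself,
  // so the only way to interact with the actor is by sending it messages.
  val greeter = system.actorOf(Props[Greeter](), "greeter")
  greeter ! "world"
  system.terminate()
}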
Much of the core functionality was duplicated in Scala and Akka actors, so unification seems the most sensible choice (especially given that the development teams of both libraries are now part of the same company, Typesafe).
The main gain is avoiding duplication of the same core functionality, which would only create confusion and compatibility issues.
Given that a choice is due, it only remains to decide which would be the standard implementation.
It's evident to me that Akka has more to offer in this respect, being a full-blown framework with many enterprise-level features already included and more to come in the near future.
I can't think of a specific case where scala.actors is capable of accomplishing what akka can't.
P.S. Similar reasoning led to the unification of the standard future/promise implementation in 2.10.
The whole Scala language and community stand to gain from a simplified interface to base language features, instead of a fragmented scene made up of different frameworks, each with its own syntax and model to learn.
The same can't be said for other, higher-level concerns, like web frameworks, where developers gain from a richer panorama of available solutions.

How do you define an interface, knowing that it should be immutable once published?

Anyone who develops knows that code is a living thing, so how do you go about defining a "complete" interface when considerable functionality may not have been recognised before the interface is published?
Test it a lot. I've never encountered a panacea for this particular problem - there are different strategies depending on the particular needs of the consumers and the goals of the project - for example, are you Microsoft shipping the ASP.NET MVC framework, or are you building an internal LoB application? But distilled to its simplest, you can never go wrong by testing.
By testing, I mean using the interface to implement functionality. You are testing the contract to see if it can fulfill the needs. Come up with as many different possible uses for the interface as you can think of, and implement them as far as you can go. Whiteboard the rest, and it should become clear what's missing. I'd say for a given "missing member", if you don't hit it within 3-5 iterations, you probably won't need it.
Version Numbers.
Define a "Complete Interface". Call it version 1.0.
Fix the problems. Call it version 2.0.
They're separate. They overlap in functionality, but they're separate.
Yes, you increase the effort to support both. That is, until you deprecate 1.0 and, eventually, stop supporting it.
Just make your best reasonable guess about the future, and if you need more, create a second version of your interface.
You cannot do it in a one-shot release. You need feedback.
What you can do is first design a clean interface that provides all the functionality your library should offer; then expose it to your user base for real-world usage; then use the feedback as a guide for updating the interface (without adding features other than helper functions/classes) until it starts to stabilize.
You cannot rely only on experience and good practice. They really help, but they're never enough. You simply need feedback.
Make sure the interface, or the interface technology (such as RPC, COM, CORBA, etc), has a well-defined mechanism for upgrades and enhanced interfaces.
For example, Microsoft frequently has MyInterface, followed by MyInterfaceEx, followed by MyInterfaceEx2, etc, etc.
Other systems have a means to query and negotiate for different versions of the interface (see DirectX, for one).
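A sketch of that convention in Scala (Storage and StorageEx are invented names): the published trait stays frozen, new capability arrives in a successor, and callers can query for the richer version at run time.
// Published in 1.0 and never modified again.
trait Storage {
  def read(key: String): Option[String]
  def write(key: String, value: String): Unit
}

// The 2.0 additions live in an extension, mirroring the MyInterface / MyInterfaceEx style.
trait StorageEx extends Storage {
  def delete(key: String): Unit
}

object StorageClient {
  // Negotiate for the richer interface at run time and degrade gracefully otherwise.
  def cleanUp(storage: Storage, key: String): Unit = storage match {
    case ex: StorageEx => ex.delete(key)
    case _             => () // 1.0 implementations have no delete; do nothing
  }
}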