I am learning about Akka, and while exploring the API, I came across something kind of curious (at least to me). The tell function is defined directly on the ActorRef class. However, the ask function is declared in the AskSupport trait. I can't think of any good reason why they needed a separate trait for AskSupport rather than including ask in the API for ActorRef (and ? in the API of ScalaActorRef). Would anyone care to enlighten me on the reasoning behind this?
The Actor docs give the author's reasoning:
The ask pattern involves actors as well as futures, hence it is offered as a use pattern rather than a method on ActorRef
The ask pattern includes Future extensions, not just ActorRef extensions. In particular, if you look through the akka.pattern docs, you'll note PipeToSupport and PipeableFuture beyond the extensions to Akka objects.
According to the Akka Docs:
There are performance implications of using ask since something
needs to keep track of when it times out, there needs to be something
that bridges a Promise into an ActorRef and it also needs to be
reachable through remoting. So always prefer tell for performance,
and only ask if you must.
Given this, it makes sense that you'd want to subtly discourage the use of ask by requiring users to explicitly import the facility. This has the added benefit of reducing bloat in the ActorRef API.
Keep in mind that Akka is based on the Erlang system of actors, and I believe that tell is the only facility for communication between actors in that system. I imagine the Akka guys want to keep the ActorRef in Akka as similar as possible to its Erlang analog.
One other thing to keep in mind is that ask is simply a pattern built on tell: it sends a tell and then sets up a Future and a temporary actor to handle the responding tell, making it look like a native request/response process. Under the hood, it's really just two actors telling back and forth. It's a convention of using tells to create the appearance of request/response, and that's why it's set up as a pattern in Akka that can be pulled in when needed, as opposed to being part of the ActorRef.
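To make that concrete, here is a minimal sketch of the pattern with classic Akka actors; the Ping, Pong and Ponger names are made up, and the explicit import of akka.pattern.ask plus the implicit Timeout are exactly the pieces the pattern asks you to pull in:

import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask            // brings the ? / ask extension into scope
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

case object Ping
case object Pong

class Ponger extends Actor {
  def receive = { case Ping => sender() ! Pong }
}

object AskDemo extends App {
  val system = ActorSystem("ask-demo")
  val ponger = system.actorOf(Props[Ponger](), "ponger")

  // The temporary actor behind ask uses this timeout to fail the Future
  // if no reply arrives in time.
  implicit val timeout: Timeout = Timeout(3.seconds)

  val reply: Future[Any] = ponger ? Ping   // compiles only because of akka.pattern.ask
  import system.dispatcher                 // ExecutionContext for the callback
  reply.foreach(r => println(s"got $r"))
}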
Related
All of the actor system implementations I have seen (primarily using Akka) presume a web app, meaning an HTTP interface that can quite naturally be served by an asynchronous actor system.
But what if I'm writing a desktop app, or a library to be used as a component of a platform-independent app?
I want client subroutines to be able to call val childObj = parentObject.createChild( initParam ) without having to know about my allowed message types, or the actor system in general. E.g., not parentObject ! CreateChild( initParam ), followed by handling a response received in another message.
I know I could hide the asynchronous responses behind Futures, but are there other known patterns for a synchronous system handing off computation to a hidden actor system?
(I realize that this will result in a blocking call into the library.)
Desktop app
A lot of things that apply to libraries also apply here, so look at the below section. If nothing else, you can wrap the part of your code that uses Akka as a separate library. One caveat is that, if you're using Swing, you will probably want to use SwingUtilities.invokeLater to get back on the Event Dispatch Thread prior to interacting with a GUI. (Also, don't block that thread. You probably want to use futures to avoid this, so consider designing your library to return futures.)
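As a hedged sketch of that caveat (loadReport is a made-up stand-in for whatever your library exposes as a Future): complete the Future off the EDT, then hop back with SwingUtilities.invokeLater before touching any component:

import javax.swing.{JLabel, SwingUtilities}
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

object DesktopGlue {
  // Hypothetical library call; imagine it hides an actor system behind a Future.
  def loadReport(id: Int): Future[String] = Future(s"report $id")

  def showReport(label: JLabel, id: Int): Unit =
    loadReport(id).onComplete { result =>
      // Hop back onto the Event Dispatch Thread before touching Swing components.
      SwingUtilities.invokeLater(new Runnable {
        def run(): Unit = result match {
          case Success(text) => label.setText(text)
          case Failure(e)    => label.setText(s"failed: ${e.getMessage}")
        }
      })
    }
}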
Library
Your example seems to assume a thin wrapper around your actors, or, at the very least, a bottom-up design where your interface is driven by your implementation details. Instead, design the library in a more top-down manner, first figuring out the library's interface, then (possibly) using Akka as an implementation detail. (This is a good idea for library design in general.) If you've already written something using Akka, don't worry: just design the interface separately from the implementation and stitch the two together. If you do this, you don't need a specific pattern, as the normal patterns of interface design apply regardless of the fact that you are using Akka.
As an example, consider a compiler. The compile method signature might be simple:
def compile(sources: List[File]): List[File] // Returns a list of binaries
No mention of actors here. But under the hood, the implementation might do:
compileActor ? Compile(sources)
...and block on the result. The main compiler actor might depend on other actors, but there's no reason to expose those through the public API.
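Putting the two together, a hedged sketch of what that facade might look like (CompilerFacade and the Compile message are invented names; the point is that only the synchronous compile method is public):

import java.io.File
import akka.actor.ActorRef
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._

// The message type can stay private to the library.
private case class Compile(sources: List[File])

class CompilerFacade(compileActor: ActorRef) {
  private implicit val timeout: Timeout = Timeout(5.minutes)

  // Public, synchronous API: no actors or message types leak out,
  // and the caller simply blocks until the binaries are ready.
  def compile(sources: List[File]): List[File] =
    Await.result(
      (compileActor ? Compile(sources)).mapTo[List[File]],
      timeout.duration
    )
}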
This is more of a design and best practices question. I am converting an app to use Actors and Futures. Currently these are the layers (before Akka is in the mix):
Play Controller -> Service layer -> (Slick) DAOs
Now I want to have something like
Play Controller -> Actors -> Services (now returning Futures) -> DAO
In doing so I am finding that, since the original Service layer had all the methods with the required business logic, the Actors layer looks like just a mediator. Is it okay (from a design point of view) to get rid of the Service layer now that everything is going to go through Actors?
Play Controller -> Actors (with business methods) -> business methods calling into the DAO (which the Service methods were doing before)
Or should I continue with the existing Service layer and use those methods from Akka Actors only? The risk of keeping the Service layer as it is is that all Service methods remain public and free to be called from anywhere else, breaking the pattern (e.g. if somebody called a Service method directly in a controller, bypassing the Actors).
There are 2 approaches to actor-based system design:
Actors are just a multithreading abstraction, e.g. TaskExecutors
Actors are a foundation for business modelling, e.g. GhostActor in a Pac-Man game.
You need to ask yourself which one you want to follow with your refactoring, and why.
The first option you mentioned (Actors talk to Services via Futures) is a multithreading abstraction. You want to do that when you have just hit a major performance bottleneck. Possibly actors can help, but there are many other tools that can do that.
The second option you mentioned (Actors replace Services) uses actors for business domain modelling, and it's very powerful. You put your logic in actors, which consist of smaller actors, which consist of smaller actors, and so on. Each of them represents a tiny bit of your business domain. The smaller the actor the better. There are many advantages to using this approach:
Each of those actors can internally use a different strategy for obtaining and storing information. Some of them may use an HTTP service via Futures, some of them may use actor communication, some of them may be event-sourced.
You have a declarative and human-understandable abstraction you can use in your entire system: the Actor. You just need to switch your brain from thinking about technical obstacles to thinking about business obstacles.
When you follow some simple technical rules, you get scalability built into your system without thinking about it too much. Those rules become second nature after some time.
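To make the idea concrete, here is a hypothetical sketch of the GhostActor example mentioned above; every message and actor name is made up, and the only point is that a domain actor delegates to smaller domain actors:

import akka.actor.{Actor, ActorRef, Props}

// Made-up messages for a tiny slice of a Pac-Man-like domain.
case class PacManMoved(x: Int, y: Int)
case object StartChasing

// A domain actor that is itself composed of smaller domain actors.
class GhostActor extends Actor {
  private val movement: ActorRef = context.actorOf(Props[MovementActor](), "movement")
  private val mood: ActorRef     = context.actorOf(Props[MoodActor](), "mood")

  def receive = {
    case pos: PacManMoved => movement ! pos       // delegate pathfinding
    case StartChasing     => mood ! StartChasing  // delegate mode switching
  }
}

class MovementActor extends Actor {
  def receive = {
    case PacManMoved(x, y) => () // recompute a path towards (x, y) here
  }
}

class MoodActor extends Actor {
  def receive = {
    case StartChasing => () // switch from scatter mode to chase mode here
  }
}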
Of course, there are also some cons:
There are business domains that cannot be easily modelled with actors.
You are making your system totally dependent on one toolkit.
I hope this can help you somehow. If you want to follow up on something, just shout it out. Thanks!
What should I use to implement simple event handling in Scala?
I don't want to rely on Scala.Swing APIs and I'm not sure if I should use Actors.
What I need are simple handlers and event sources, generic over the event type. Concurrency is not a requirement. Aren't Actors too heavy for simple tasks requiring no concurrency?
If you don't want to rely on Scala Swing and you only require publishers and observers, why not roll your own implementation? This would amount to two or three Scala traits of under ten lines each (depending on whether you also want event buses or not).
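For example, a minimal hand-rolled sketch might look like this (all names are made up); it is generic over the event type and involves no concurrency at all:

trait Observer[-E] {
  def onEvent(event: E): Unit
}

trait Publisher[E] {
  private var observers: List[Observer[E]] = Nil

  def subscribe(obs: Observer[E]): Unit   = observers ::= obs
  def unsubscribe(obs: Observer[E]): Unit = observers = observers.filterNot(_ == obs)

  protected def publish(event: E): Unit = observers.foreach(_.onEvent(event))
}

// Usage with a made-up event type.
case class ButtonClicked(id: String)

object ClickSource extends Publisher[ButtonClicked] {
  def click(id: String): Unit = publish(ButtonClicked(id))
}

object Demo extends App {
  ClickSource.subscribe(new Observer[ButtonClicked] {
    def onEvent(e: ButtonClicked): Unit = println(s"clicked ${e.id}")
  })
  ClickSource.click("ok")   // prints: clicked ok
}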
If you don't mind a more complex API (especially since you're getting concurrency handling for free), you could try out the Observables from RxScala. Take a look at the aforementioned Observable, Observer and Subject APIs.
In my case the Akka actor solution was a little bit overkill, so I ended up implementing my own event-sourcing solution in this open source project.
The choice of persistence layer is left to the developer, but I provide practical examples using Couchbase.
Take a look in case you find it useful.
https://github.com/politrons/Scalaydrated
I was just comparing different scala actor implementations and now I'm wondering what could have been the motivation to deprecate the existing scala actor implementation in 2.10 and replace the default actor with the Akka implementation? Neither the migration guide nor the first announcement give any explanation.
According to the comparison the two solutions were different enough that keeping both would have been a benefit. Thus, I'm wondering whether there were any major problems with the existing implementation that caused this decision? In other words, was it a technical or a political decision?
I can only offer a guess as an answer:
Akka provides a stable and powerful library for working with Actors, along with lots of features that deal with high concurrency (futures, agents, transactional actors, STM, FSM, non-blocking I/O, ...).
It also implements actors in a safer way than Scala's, in that client code only has access to a generic ActorRef. This makes it impossible to interact with actors other than through message passing.
[edited: As Roland pointed out, this also enables additional features like fault tolerance through a supervision hierarchy and location transparency: the ability to deploy the actor locally or remotely with no change needed in the client code.
The overall design more closely resembles the original one in Erlang.]
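As a small illustration of the encapsulation point (the Counter actor is a made-up example): actorOf hands back only an ActorRef, so the actor's internal state can be reached solely through messages:

import akka.actor.{Actor, ActorRef, ActorSystem, Props}

class Counter extends Actor {
  private var count = 0            // internal state, never exposed
  def receive = {
    case "inc" => count += 1
    case "get" => sender() ! count
  }
}

object Encapsulation extends App {
  val system = ActorSystem("demo")

  // actorOf returns only an ActorRef, never a Counter instance.
  val counter: ActorRef = system.actorOf(Props[Counter](), "counter")
  counter ! "inc"
  // counter.count   // does not compile: ActorRef has no such member
}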
Many of the core features were duplicated in Scala and Akka actors, so unification seems the most sensible choice (given that the development teams of both libraries are now part of the same company, too: Typesafe).
The main gain is avoiding duplication of the same core functionality, which would only create confusion and compatibility issues.
Given that a choice is due, it only remains to decide which would be the standard implementation.
It's evident to me that Akka has more to offer in this respect, being a full-blown framework with many enterprise-level features already included and more to come in the near future.
I can't think of a specific case where scala.actors is capable of accomplishing what Akka can't.
P.S. Similar reasoning led to the unification of the standard future/promise implementation in 2.10.
The whole Scala language and community stand to gain from a simplified interface to base language features, instead of a fragmented scene made of different frameworks, each with its own syntax and model to learn.
The same can't be said for other, higher-level aspects, like web frameworks, where the developer gains from a richer panorama of available solutions.
After being exposed to Scala's Actors and Clojure's Futures, I feel like both languages have excellent support for multi-core data processing.
However, I still have not been able to determine the real engineering differences between the concurrency features and the pros/cons of the two models. Are these languages complementary, or opposed in terms of their treatment of concurrent process abstractions?
Secondarily, regarding big data issues, it's not clear whether the Scala community continues to support Hadoop explicitly (whereas the Clojure community clearly does). How do Scala developers interface with the Hadoop ecosystem?
Some problems are well solved by agents/actors and some are not. The distinction is less about languages than about how specific problems fit within general classes of solutions. This is a (very short) comparison of actors/agents vs. references, to clarify the point that the tool must fit the concurrency problem.
Actors excel in distributed situations where no data needs to be concurrently modified. If your problem can be expressed purely by passing messages, then actors will do the trick. Actors work poorly where they need to modify several related data structures at the same time; the canonical example of this is moving money between bank accounts.
Clojure's refs are a great solution to the problem of many threads needing to modify the same thing at the same time. They excel at shared-memory multi-processor systems like today's PCs and servers. In addition to the bank account example, Rich Hickey (the author of Clojure) uses the example of a baseball game to explain why this is important. If you wanted to use actors to represent a baseball game, then before you moved the ball, all the fans would have to send it a message asking where it was... and if they wanted to watch a player catching the ball, things get even more complex.
Clojure has Cascalog, which makes writing Hadoop jobs look a lot like writing Clojure.
Actors provide a way of handling the potential interleaving and synchronization control that inevitably comes when trying to get multiple threads to work together. Each actor has a queue of messages that it processes in order one at a time so as to avoid the need to include explicit locks. In this case a Future provides a way of waiting for a response from an actor.
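A small sketch of both points, assuming classic Akka (the Accumulator actor and its messages are made up): the mailbox serialises access to the mutable state, and ask gives you a Future to wait on for the reply:

import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

case class Add(n: Int)
case object Total

// The mailbox delivers one message at a time, so the mutable sum
// needs no explicit lock.
class Accumulator extends Actor {
  private var sum = 0
  def receive = {
    case Add(n) => sum += n
    case Total  => sender() ! sum
  }
}

object MailboxDemo extends App {
  val system = ActorSystem("demo")
  val acc = system.actorOf(Props[Accumulator](), "acc")

  (1 to 100).foreach(i => acc ! Add(i))

  implicit val timeout: Timeout = Timeout(3.seconds)
  val total: Future[Int] = (acc ? Total).mapTo[Int]  // the Future carries the reply
  println(Await.result(total, timeout.duration))     // prints 5050
}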
As far as Hadoop is concerned, Twitter just released a library specifically for Hadoop called Scalding, but as long as a library is written for the JVM, it should work with either language.