I was just comparing different Scala actor implementations, and now I'm wondering what the motivation could have been to deprecate the existing Scala actor implementation in 2.10 and replace the default actors with the Akka implementation? Neither the migration guide nor the first announcement gives any explanation.
According to the comparison the two solutions were different enough that keeping both would have been a benefit. Thus, I'm wondering whether there were any major problems with the existing implementation that caused this decision? In other words, was it a technical or a political decision?
I can only offer an educated guess:
Akka provides a stable and powerful library for working with actors, along with many features for dealing with high concurrency (futures, agents, transactional actors, STM, FSMs, non-blocking I/O, ...).
It also implements actors in a safer way than Scala's, in that client code only has access to a generic ActorRef. This makes it impossible to interact with an actor other than through message passing.
[edited: As Roland pointed out, this also enables additional features like fault tolerance through a supervision hierarchy, and location transparency: the ability to deploy an actor locally or remotely with no change needed in the client code. The overall design more closely resembles the original one in Erlang.]
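To make the ActorRef point concrete, here is a minimal sketch using Akka's classic untyped API (the Counter actor and its string messages are invented for illustration). Note that actorOf hands back only an ActorRef, never the Counter instance, so message passing is the only way to reach the actor's state:

    import akka.actor.{Actor, ActorRef, ActorSystem, Props}

    // The actor's mutable state is invisible to the outside world.
    class Counter extends Actor {
      private var count = 0
      def receive = {
        case "increment" => count += 1
        case "get"       => sender() ! count
      }
    }

    object Demo extends App {
      val system = ActorSystem("demo")
      // Only an ActorRef is returned, never the Counter instance itself.
      val counter: ActorRef = system.actorOf(Props[Counter], "counter")
      counter ! "increment" // fire-and-forget message send
    }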
Many of the core features were duplicated between Scala and Akka actors, so unification seems the most sensible choice (especially given that the development teams of both libraries are now part of the same company: Typesafe).
The main gain is avoiding duplication of the same core functionality, which would only create confusion and compatibility issues.
Given that a choice had to be made, it only remained to decide which would be the standard implementation.
It's evident to me that Akka has more to offer in this respect, being a full-blown framework with many enterprise-level features already included and more to come in the near future.
I can't think of a specific case where scala.actors is capable of accomplishing something Akka can't.
P.S.: Similar reasoning led to the unification of the standard future/promise implementation in 2.10.
The whole Scala language and community stand to gain from a simplified interface to base language features, instead of a fragmented scene made of different frameworks, each with its own syntax and model to learn.
The same can't be said for other, higher-level concerns, like web frameworks, where developers gain from a richer panorama of available solutions.
Related
I agonized over finding a use case where we still need mutexes and other synchronization primitives like semaphores when either of these concurrency paradigms (actors or CSP) is employed, but I couldn't find one. Do you think the old concurrency primitives are still relevant? And if so, can you provide an example that proves helpful?
For those who do not know CSP (Communicating Sequential Processes), it is the paradigm employed by Golang and Kotlin.
Most of the awkwardness arises from communicating by sharing memory in a multithreaded environment. So if we ban that, we can get rid of many issues.
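As a rough sketch of that idea in plain Scala (the message protocol and single-owner thread are invented for the example): one thread owns the state, everyone else talks to it through a queue, and no mutex ever guards the counter itself.

    import java.util.concurrent.LinkedBlockingQueue

    object SingleOwner extends App {
      sealed trait Msg
      case object Increment extends Msg
      final case class Get(reply: Int => Unit) extends Msg

      val inbox = new LinkedBlockingQueue[Msg]()

      // Only this thread ever touches `count`; coordination happens
      // entirely through the queue, so no lock is needed on the state.
      val owner = new Thread(new Runnable {
        def run(): Unit = {
          var count = 0
          while (true) inbox.take() match {
            case Increment  => count += 1
            case Get(reply) => reply(count)
          }
        }
      })
      owner.setDaemon(true)
      owner.start()

      (1 to 100).foreach(_ => inbox.put(Increment))
      inbox.put(Get(n => println(s"count = $n")))
      Thread.sleep(100) // crude: let the owner print before the JVM exits
    }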
I'm following the Functional Reactive Programming in Scala course on Coursera and we deal with RxScala Observables (based on RxJava).
As far as I know, the Play Iteratees library looks a bit like RxScala Observables: Observables are a bit like Enumerators, and Observers are a bit like Iteratees.
There's also the Scalaz Stream library, and maybe some others?
So I'd like to know the main differences between all these libraries.
In which case one could be better than another?
PS: I wonder why the Play Iteratees library was not chosen by Martin Odersky for his course, since Play is in the Typesafe stack. Does that mean Martin prefers RxScala over Play Iteratees?
Edit: the Reactive Streams initiative has just been announced, as an attempt to standardize a common ground for achieving statically typed, high-performance, low-latency, asynchronous streams of data with built-in non-blocking back pressure.
PS: I wonder why the Play Iteratees library was not chosen by Martin Odersky for his course, since Play is in the Typesafe stack. Does that mean Martin prefers RxScala over Play Iteratees?
I'll answer this. The decision of which streaming APIs to push/teach is not one that was made just by Martin, but by Typesafe as a whole. I don't know what Martin personally prefers (though I have heard him say iteratees are just too hard for newcomers), but we at Typesafe think that iteratees come with too high a learning curve to teach to newcomers to asynchronous IO.
At the end of the day, the choice of streaming library really comes down to your use case. Play's iteratee library handles practically every streaming use case in existence, but at the cost of a very difficult-to-learn API (even seasoned Haskell developers often struggle with iteratees), and also some loss in performance. Other APIs handle fewer use cases; RX, for example, doesn't (currently) handle back pressure, and very few of the other APIs are suitable for streamed parsing. But streamed parsing is actually a pretty rare use case for end users; in most cases it suffices to simply buffer and then parse. So Typesafe has chosen APIs that are easy to learn and meet the majority of the most common use cases.
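To picture what back pressure means here, consider a toy sketch (this is the underlying idea only, not the Reactive Streams or RX API): the consumer explicitly signals demand, and the producer never emits more than was requested.

    // Invented for illustration: a pull-based producer.
    class Producer(data: Iterator[Int]) {
      // Emit at most n elements, so the producer cannot outrun the consumer.
      def request(n: Int)(consume: Int => Unit): Unit =
        data.take(n).foreach(consume)
    }

    object BackPressureDemo extends App {
      val producer = new Producer(Iterator.from(1)) // an endless source
      producer.request(3)(x => println(s"got $x"))  // consumer asks for 3...
      producer.request(2)(x => println(s"got $x"))  // ...then 2 more when ready
    }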
Iteratees and scalaz-stream aren't really that similar to RxJava. The crucial difference is that they are concerned with resource safety (that is, closing files, sockets, etc. once they aren't needed anymore), which requires feedback (Iteratees can tell Enumerators they are done, but Observers don't tell Observables anything) and makes them significantly more complex.
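That feedback channel can be shown with a drastically simplified encoding (this is a toy, not Play's actual iteratee API): the consumer answers each element with its next state, and Done tells the producer it can stop and release its resources early.

    object IterateeDemo extends App {
      sealed trait Iteratee[E, A]
      final case class Cont[E, A](k: E => Iteratee[E, A]) extends Iteratee[E, A]
      final case class Done[E, A](result: A) extends Iteratee[E, A]

      // The "enumerator": feeds elements until the iteratee says Done.
      @annotation.tailrec
      def feed[E, A](input: Iterator[E], it: Iteratee[E, A]): Option[A] =
        it match {
          case Done(a)                  => Some(a) // consumer finished early
          case Cont(k) if input.hasNext => feed(input, k(input.next()))
          case Cont(_)                  => None    // input ran out first
        }

      // A consumer that stops at the first element greater than 10.
      def firstOver10: Iteratee[Int, Int] =
        Cont(e => if (e > 10) Done(e) else firstOver10)

      println(feed(Iterator(1, 5, 42, 7), firstOver10)) // prints Some(42)
    }

An Observer's onNext callback, by contrast, returns nothing to the Observable, which is exactly the missing feedback the answer above is pointing at.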
In a couple of places it is stated that Akka is somehow "real-time". E.g.:
http://doc.akka.io/docs/akka/2.0/intro/what-is-akka.html
Unfortunately I was not able to find a deeper explanation of the way in which Akka is "real-time". So this is the question:
In what way is Akka real-time?
I assume Akka is not really a real-time computing system in the sense of the following definition, is it? https://en.wikipedia.org/wiki/Real-time_computing
No language built on the JVM can be real-time in the sense that it is guaranteed to react within a certain amount of time, unless it is using a JVM that supports real-time extensions (and takes advantage of them). It just isn't technically possible, and Akka is no exception.
However, Akka does provide support for running things quickly and with pretty good timing compared to what is possible. And in the docs, the other definitions of real-time (meaning online, while-running, with-good-average-latency, fast-enough-for-you-not-to-notice-the-delay, etc.) may be used on occasion.
Since Akka is a message-driven system, the use of "real-time" relates to one of the definitions in the Wikipedia article you mention: in the domain of data transfer, media processing and enterprise systems, the term is used to mean "without perceivable delay". "Real time" here equates to "going with the flow": events/messages are efficiently processed/consumed as they are produced (as opposed to batch processing).
Akka can be a foundation for a soft real-time system, but not for a hard one, because of the limitations of the JVM. If you scroll a bit down in the Wikipedia article, you will find the section "Criteria for real-time computing", and there is a nice explanation about the different "real-timeness" criteria.
"systems that are subject to a 'real-time constraint'—e.g. operational deadlines from event to system response." (en.wikipedia.org/wiki/Real-time_computing)
The Akka guys might be referring to features like futures, which allow you to put a time constraint on the expected result of a computation.
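For instance, with the standard library's futures the caller can bound how long it is willing to wait for a result. This is a small illustrative snippet, and note it is a deadline on the waiting, not a hard real-time guarantee on the computation:

    import scala.concurrent.{Await, Future, TimeoutException}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    object TimeoutDemo extends App {
      val slow: Future[Int] = Future { Thread.sleep(5000); 42 }

      try {
        // Wait at most one second for the result.
        val result = Await.result(slow, 1.second)
        println(s"got $result")
      } catch {
        case _: TimeoutException => println("no answer within the deadline")
      }
    }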
Also, the clustering model of Akka may be used to mean an online system which is real-time (abstracted so as to look like it's running locally).
My take is that the Akka platform can support a form of real-time constraint by delivering responsive applications through the use of (I'm quoting here):
Asynchronous, non-blocking and highly performant event-driven programming model
Fault tolerance through supervisor hierarchies with “let-it-crash” semantics
Definition of time-out policies in the response delivery
As already said, all these features combined provide a platform with a form of response-time guarantee, especially compared to mainstream applications and tools available on the JVM today.
It's still arguable whether Akka could strictly be defined as a real-time computing system, as per Wikipedia's definition. For such claims to be proven, you had best ask the Akka team itself.
After being exposed to Scala's actors and Clojure's futures, I feel like both languages have excellent support for multi-core data processing.
However, I still have not been able to determine the real engineering differences between the concurrency features and the pros/cons of the two models. Are these languages complementary or opposed in their treatment of concurrent process abstractions?
Secondly, regarding big data issues, it's not clear whether the Scala community continues to support Hadoop explicitly (whereas the Clojure community clearly does). How do Scala developers interface with the Hadoop ecosystem?
Some problems are well solved by agents/actors and some are not. This distinction is not really about languages so much as about how specific problems fit within general classes of solutions. This is a (very short) comparison of actors/agents vs. references, to try to clarify the point that the tool must fit the concurrency problem.
Actors excel in distributed situations where no data needs to be concurrently modified. If your problem can be expressed purely by passing messages, then actors will do the trick. Actors work poorly where they need to modify several related data structures at the same time; the canonical example of this is moving money between bank accounts.
Clojure's refs are a great solution to the problem of many threads needing to modify the same thing at the same time. They excel on shared-memory multiprocessor systems like today's PCs and servers. In addition to the bank account example, Rich Hickey (the author of Clojure) uses the example of a baseball game to explain why this is important: if you wanted to use actors to represent a baseball game, then before you moved the ball, all the fans would have to send it a message asking where it was... and if they wanted to watch a player catch the ball, things would get even more complex.
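For comparison on the Scala side, the third-party ScalaSTM library offers much the same ref/transaction model. Here is a sketch of the bank transfer, assuming the scala-stm dependency is available:

    import scala.concurrent.stm._

    object TransferDemo extends App {
      val checking = Ref(100)
      val savings  = Ref(50)

      // Both balances change in one atomic transaction: if anything
      // fails, neither account is modified, and no explicit locks appear.
      def transfer(from: Ref[Int], to: Ref[Int], amount: Int): Unit =
        atomic { implicit txn =>
          if (from() < amount) sys.error("insufficient funds")
          from() = from() - amount
          to()   = to() + amount
        }

      transfer(checking, savings, 30)
      atomic { implicit txn =>
        println(s"checking=${checking()} savings=${savings()}")
      }
    }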
Clojure has Cascalog, which makes writing Hadoop jobs look a lot like writing Clojure.
Actors provide a way of handling the potential interleaving and synchronization control that inevitably comes up when trying to get multiple threads to work together. Each actor has a queue of messages that it processes in order, one at a time, so as to avoid the need for explicit locks. In this case a Future provides a way of waiting for a response from an actor.
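As an illustration of that last point, Akka's ask pattern returns a Future that is completed by the actor's reply. The Echo actor here is invented for the example, and system.terminate() assumes a reasonably recent Akka version:

    import akka.actor.{Actor, ActorSystem, Props}
    import akka.pattern.ask
    import akka.util.Timeout
    import scala.concurrent.Await
    import scala.concurrent.duration._

    // A hypothetical actor that echoes whatever it receives.
    class Echo extends Actor {
      def receive = { case msg => sender() ! s"echo: $msg" }
    }

    object AskDemo extends App {
      val system = ActorSystem("demo")
      val echo   = system.actorOf(Props[Echo], "echo")

      implicit val timeout: Timeout = Timeout(3.seconds)
      val reply = (echo ? "hello").mapTo[String] // `?` returns a Future at once

      println(Await.result(reply, 3.seconds))    // blocking only for the demo
      system.terminate()
    }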
As far as Hadoop is concerned, Twitter just released a library specifically for Hadoop called Scalding, but as long as a library is written for the JVM, it should work with either language.
In looking into Go recently, it seems like one could draw an analogy between Go and Scala/Akka, where an Akka actor is similar to a goroutine and an ActorRef is similar to a Go channel.
Other than platform type issues (JVM or not) what are the functional differences that would lead one to choose one or the other?
Disclaimer: I am the product owner of Akka
You could probably implement the Actor Model on top of goroutines and channels, but I see them as two distinctly different layers of abstraction.
Questions for the person choosing could be virtually anything but here are some suggestions:
Dev/Deployment platform?
Possibility/desire to reuse other libraries and/or languages?
Remoting/Clustering?
Development environment/infrastructure
Availability of developers
...
If someone knows of an Actor Model implementation for Golang, I'd love a link.
I feel like Scala/Akka is more mature. There is a bigger user community and more momentum.
Other people won't agree, but to me Go still feels like a "me too", and I would not code anything serious in a "me too" language.