How are Scala 2.9 parallel collections working behind the scenes? - scala

Scala 2.9 introduced parallel collections. They are a really great tool for certain tasks. However, how do they work internally and am I able to influence the behavior/configuration?
What method do they use to figure out the optimal number of threads? If I am not satisfied with the result, are there any configuration parameters to adjust?
I'm interested not only in how many threads are actually created, but also in how the actual work is distributed amongst them, how the results are collected, and how much magic is going on behind the scenes. Does Scala somehow test whether a collection is large enough to benefit from parallel processing?

Briefly, there are two orthogonal aspects to how your operations are parallelized:
The extent to which your collection is split into chunks (i.e. the size of the chunks) for a parallelizable operation (such as map or filter)
The number of threads to use for the underlying fork-join pool (on which the parallel tasks are executed)
For #2, this is managed by the pool itself, which discovers the "ideal" level of parallelism at runtime (see java.lang.Runtime.getRuntime.availableProcessors)
For #1, this is a separate problem and the scala parallel collections API does this via the concept of work-stealing (adaptive scheduling). That is, when a particular piece of work is done, a worker will attempt to steal work from other work-queues. If none is available, this is an indication that all of the processors are very busy and hence a bigger chunk of work should be taken.
Aleksandar Prokopec, who implemented the library, gave a talk at this year's Scala Days which will be online shortly. He also gave a great talk at Scala Days 2010 where he describes in detail how the operations are split and re-joined (there are a number of issues that are not immediately obvious and some lovely bits of cleverness in there too!).
A more comprehensive answer is available in the PDF describing the parallel collections API.
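The pool side (#2) is the part you can configure directly. In Scala 2.10+ every parallel collection carries a settable `tasksupport` field, so you can plug in a fork-join pool with an explicit parallelism level (in 2.9 the pool was a global instead). A minimal sketch, assuming Scala 2.12 or later, where `java.util.concurrent.ForkJoinPool` is accepted directly (earlier versions use `scala.concurrent.forkjoin.ForkJoinPool`):

```scala
import java.util.concurrent.ForkJoinPool
import scala.collection.parallel.ForkJoinTaskSupport

val xs = (1 to 10000).toVector.par

// By default the pool sizes itself from
// Runtime.getRuntime.availableProcessors. Override that by attaching
// a task support with an explicit parallelism level:
xs.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(4))

// This map now runs on at most 4 worker threads.
val total = xs.map(_ * 2).sum
```

The chunk-splitting side (#1) is handled internally by the adaptive work-stealing scheduler and is not exposed as a tuning knob.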

Related

Is it possible in Spark to run concurrent jobs on the same SparkSession?

I am an amateur user of Spark and Scala. Although I did numerous searches, I could not find my answer.
Is it possible to assign different tasks to different executors at the same time on a single driver program?
For example, suppose we have 10 nodes. I want to write code that classifies a dataset using the Naive Bayes algorithm on five workers and, at the same time, assigns the other five workers the task of classifying the dataset with a decision tree algorithm. Afterward, I will combine the answers.
HamidReza,
What you want to achieve is running two actions in parallel from your driver. It's definitely possible, but it only makes sense if your actions are not using the whole cluster (for better resource management, in fact).
You can use concurrency for this. There are many ways of implementing a concurrent program, starting with Futures (I can't really recommend this approach, but it seems to be the most popular choice in Scala), up to more advanced types like Tasks (you can take a look at popular functional libraries like Monix, Cats, or ZIO).
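A rough sketch of the Futures approach, with placeholder functions standing in for the two Spark actions (`trainNaiveBayes` and `trainDecisionTree` are made-up names for illustration, not Spark or MLlib API):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Placeholders standing in for the two Spark actions; in real code each
// would kick off its own job (e.g. model training) from the driver.
def trainNaiveBayes(): String   = { Thread.sleep(50); "nb-model" }
def trainDecisionTree(): String = { Thread.sleep(50); "dt-model" }

// Launch both actions concurrently from the driver...
val nb = Future(trainNaiveBayes())
val dt = Future(trainDecisionTree())

// ...then combine the results once both complete.
val combined = for { a <- nb; b <- dt } yield (a, b)
val result   = Await.result(combined, 10.seconds)
```

Note that the two jobs will only actually run in parallel if the cluster's scheduler has capacity for both; otherwise they queue behind each other.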

The output of the Wordcount is being stored in different files

The output of the WordCount is being stored in multiple files.
However, the developer doesn't have control over where (IP, path) the files reside on the cluster.
In the MapReduce API, there is a provision for developers to write a reduce program to address this. How can this be handled in Apache Beam with the DirectRunner or any other runner?
Indeed -- the WordCount example pipeline in Apache Beam writes its output using TextIO.Write, which doesn't (by default) specify the number of output shards.
By default, each runner independently decides how many shards to produce, typically based on its internal optimizations. The user can, however, control this via the .withNumShards() API, which forces a specific number of shards. Of course, forcing a specific number may require more work from a runner, which may or may not result in somewhat slower execution.
Regarding "where the files stay on the cluster": it is Apache Beam's philosophy that this complexity should be abstracted away from the user. In fact, Apache Beam raises the level of abstraction such that users don't need to worry about this. It is the runner's and/or storage system's responsibility to manage this efficiently.
Perhaps to clarify: we can draw an easy parallel with low-level programming (e.g., direct assembly), versus unmanaged programming (e.g., C or C++), versus managed programming (e.g., C# or Java). As you go higher in abstraction, you can no longer control data locality (e.g., processor caching), but you gain power, ease of use, and portability.

Suggestions on how to go about a simple example displaying scala's multiprocessor capabilities

So I have completed the Coursera course on Scala and have taken it upon myself to do a small POC showing off the multiprocessor capabilities of Scala.
I am looking at creating a very small example where an application can launch multiple tasks (each task will do some network-related queries etc.) and I can show the usage of multiple cores as well.
Also, there will be a thread that listens on a specific port of a machine and spawns tasks based on the information it receives there.
Any suggestions on how to proceed with this kind of problem?
I don't want to use Akka for now.
Parallel collections are perhaps the least-effort way to make use of multiple processors in Scala. They naturally lead into thinking about how best to organise one's code and data to take advantage of the parallel operations, and, more importantly, about what doesn't get faster.
As a more concrete problem, suppose you have read a CSV file (or XML document, or whatever) and want to parse the data. If the records have already been split into a collection such as a List[String], you can call .par to create a parallel sequence, and a subsequent .map will use all cores where possible. The result retains the same ordering even though the operations were not executed sequentially. Consider summing the values on each line:
val in: List[String] = ...
val out = in.par.map { line =>
  val cols = line.split(',')
  cols.map(_.toInt).sum
}
So an in of List("1,2,3", "4,5,6") would result in an out of (6, 15), as it would without the .par, but it'll run across multiple cores. (Strictly, out is a parallel sequence; call .seq or .toList to get back an ordinary List.) Whether it's faster is another matter, since there is overhead in using parallel collections that likely makes a trivial example such as this slower. You will want to experiment to see where parallel collections benefit your use cases.
There is a more extensive discussion and documentation of parallel collections at http://docs.scala-lang.org/overviews/parallel-collections/overview.html
What about the sleeping barber problem? You could implement it in a distributed manner over the network, with the barber(s)' spawning service listening on one port and the customers spawning and requesting the barber(s) services over the network.
I think that would be vast and interesting enough while not being impossible.
Then you can build on it and expand it as much as you want, such as adding specialized barbers for different things (haircuts or shaving) and so on from there. The sky (or rather, the thread-count cap) is the limit!

What considerations should be taken when deciding whether or not to use Apache Spark?

In the past, for jobs that required a heavy processing load, I would use Scala and parallel collections.
I'm currently experimenting with Spark and find it interesting, but it has a steep learning curve. I find development slower, as I have to use a reduced Scala API.
What do I need to determine before deciding whether or not to use Spark?
The current Spark job I'm trying to implement processes approximately 5 GB of data. This data is not huge, but I'm running a Cartesian product of it, and this generates data in excess of 50 GB. Maybe using Scala parallel collections would be just as fast; I know the development time to implement the job would be shorter from my point of view.
So what considerations should I take into account before deciding to use Spark?
The main advantages Spark has over traditional high-performance computing frameworks (e.g. MPI) are fault-tolerance, easy integration into the Hadoop stack, and a remarkably active mailing list http://mail-archives.apache.org/mod_mbox/spark-user/ . Getting distributed fault-tolerant in-memory computations to work efficiently isn't easy and it's definitely not something I'd want to implement myself. There's a review of other approaches to the problem in the original paper: https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf .
However, when my work is I/O bound, I still tend to rely primarily on Pig scripts, as Pig is more mature and I think the scripts are easier to write. Spark has been great when Pig scripts won't cut it (e.g. iterative algorithms, graphs, lots of joins).
Now, if you've only got 50 GB of data, you probably don't care about distributed fault-tolerant computation (if all your data is on a single node, then no framework in the world can save you from a node failure :) ), so parallel collections will work just fine.
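For scale: a single-node Cartesian product with parallel collections is only a few lines. A toy sketch (the even-sum predicate is made up purely for illustration):

```scala
// Toy Cartesian product on one node: pair every x with every y,
// parallelising over the outer collection, and count the pairs
// whose sum is even.
val xs = (1 to 3).toVector
val ys = (1 to 3).toVector

val evenPairs = xs.par
  .map(x => ys.count(y => (x + y) % 2 == 0))
  .sum
// evenPairs == 5: the pairs (1,1), (1,3), (2,2), (3,1), (3,3)
```

The same shape scales to the 5 GB case as long as the output fits in one machine's memory; past that point, Spark's cartesian and spill-to-disk behaviour start to earn their learning curve.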

Distributing Scala over a cluster?

So I've recently started learning Scala and have been using graphs as sort of my project-to-improve-my-Scala, and it's going well - I've since managed to easily parallelize some graph algorithms (that benefit from data parallelization) courtesy of Scala 2.9's amazing support for parallel collections.
However, I want to take this one step further and have it parallelized not just on a single machine but across several. Does Scala offer any clean way to do this like it does with parallel collections, or will I have to wait until I get to the chapter in my book on Actors/learn more about Akka?
Thanks!
-kstruct
There was an attempt at creating distributed collections, but the project is currently frozen.
Alternatives would be Akka (which recently got a really cool addition: Akka Cluster), which you've already mentioned, or full-fledged cluster engines. The latter are not parallel collections in any sense, but they could be used for your task in some way: for example Scoobi for Hadoop, Storm, or even Spark (specifically Bagel, for graph processing).
There is also Swarm, which was built on top of delimited continuations.
Last but not least is Menthor; its authors claim that it is especially well suited to graph processing, and it makes use of Actors.
Since you're aiming to work with graphs, you may also consider looking at Cassovary, which was recently open-sourced by Twitter.
Signal/Collect is a framework for parallel data processing backed by Akka.
You can use Akka ( http://akka.io ): it has long been the most advanced and powerful actor and concurrency framework for Scala, and the fresh-baked version 2.0 allows for nice transparent actor remoting, hierarchies, and supervision. The canonical way to do parallel computations is to create as many actors as there are parallel parts in your algorithm, optionally spread them over several machines, send them the data to process, and then gather the results.